InterProcessMutex implements ZooKeeper's distributed-lock mechanism internally, so next we will use this utility to add distributed locking to our business logic.
Characteristics of a ZooKeeper distributed lock: 1. distributed 2. fair 3. reentrant
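The "fair" property comes from the sequence numbers ZooKeeper assigns to the ephemeral sequential nodes: the contender holding the lowest live sequence owns the lock. The sketch below simulates that principle with plain JDK collections (no ZooKeeper involved); `FairSequenceDemo` and its method names are hypothetical, for illustration only.

```java
import java.util.concurrent.ConcurrentSkipListSet;
import java.util.concurrent.atomic.AtomicLong;

// Plain-JDK simulation of the fairness principle behind ZooKeeper's
// ephemeral sequential lock nodes: lowest live sequence number wins.
public class FairSequenceDemo {
    private static final AtomicLong counter = new AtomicLong();
    private static final ConcurrentSkipListSet<Long> waiting = new ConcurrentSkipListSet<>();

    // Like creating an ephemeral sequential node: take the next sequence
    static long enqueue() {
        long seq = counter.getAndIncrement();
        waiting.add(seq);
        return seq;
    }

    // The contender with the lowest outstanding sequence holds the lock
    static boolean holdsLock(long seq) {
        return waiting.first() == seq;
    }

    // Like deleting the ephemeral node on release (or session loss)
    static void release(long seq) {
        waiting.remove(seq);
    }

    public static void main(String[] args) {
        long a = enqueue();
        long b = enqueue();
        System.out.println(holdsLock(a)); // a queued first, so it owns the lock
        System.out.println(holdsLock(b)); // b must wait its turn
        release(a);
        System.out.println(holdsLock(b)); // b is now the lowest sequence
        release(b);
    }
}
```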
Dependencies
<!-- zookeeper -->
<dependency>
    <groupId>org.apache.zookeeper</groupId>
    <artifactId>zookeeper</artifactId>
    <version>3.4.10</version>
</dependency>
<!-- zookeeper client -->
<dependency>
    <groupId>org.apache.curator</groupId>
    <artifactId>curator-framework</artifactId>
    <version>2.12.0</version>
</dependency>
<dependency>
    <groupId>org.apache.curator</groupId>
    <artifactId>curator-recipes</artifactId>
    <version>2.12.0</version>
</dependency>
<!-- lombok -->
<dependency>
    <groupId>org.projectlombok</groupId>
    <artifactId>lombok</artifactId>
    <version>1.18.16</version>
    <scope>provided</scope>
</dependency>
Local wrapper
This utility class mainly wraps the CuratorFramework client, which maintains the connection to ZooKeeper.
@Slf4j
public class CuratorClientUtil {

    private String zookeeperServer;

    @Getter
    private CuratorFramework client;

    public CuratorClientUtil(String zookeeperServer) {
        this.zookeeperServer = zookeeperServer;
    }

    // Build the CuratorFramework client and start it
    public void init() {
        // Retry policy: base wait 1s, at most 3 retries
        RetryPolicy retryPolicy = new ExponentialBackoffRetry(1000, 3);
        this.client = CuratorFrameworkFactory.builder()
                .connectString(zookeeperServer)
                .sessionTimeoutMs(5000)
                .connectionTimeoutMs(5000)
                .retryPolicy(retryPolicy)
                .build();
        this.client.start();
    }

    // Close the CuratorFramework client when the container shuts down
    public void destroy() {
        try {
            if (Objects.nonNull(getClient())) {
                getClient().close();
            }
        } catch (Exception e) {
            log.info("CuratorFramework close error=>{}", e.getMessage());
        }
    }
}
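For intuition on what `ExponentialBackoffRetry(1000, 3)` means: the wait before each retry grows roughly exponentially with the attempt number, with a random factor. The sketch below computes the upper bound per attempt, assuming the formula `baseSleep * max(1, rand(1 << (retry + 1)))` that Curator's exponential backoff is based on; `BackoffDemo` is a hypothetical helper, not part of Curator.

```java
// Upper bound on the sleep before retry attempt n (0-based), assuming
// Curator-style exponential backoff: baseSleep * max(1, rand(1 << (n+1))).
// The random factor is at most (1 << (n+1)) - 1.
public class BackoffDemo {
    static long maxSleepMs(long baseSleepMs, int retry) {
        return baseSleepMs * Math.max(1, (1L << (retry + 1)) - 1);
    }

    public static void main(String[] args) {
        System.out.println(maxSleepMs(1000, 0)); // 1000  (first retry)
        System.out.println(maxSleepMs(1000, 1)); // 3000  (second retry)
        System.out.println(maxSleepMs(1000, 2)); // 7000  (third retry)
    }
}
```

So with `(1000, 3)` the client gives up after three retries, having waited at most about 1s + 3s + 7s between attempts.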
Configuration
@Configuration
public class CuratorConfiguration {

    @Value("${zookeeper.server}")
    private String zookeeperServer;

    // Specify initMethod and destroyMethod so Spring manages the client lifecycle
    @Bean(initMethod = "init", destroyMethod = "destroy")
    public CuratorClientUtil curatorClientUtil() {
        return new CuratorClientUtil(zookeeperServer);
    }
}
Test code
Simulating requests from different clients
@Slf4j
@RestController
@RequestMapping("/test")
public class TestController {

    // Inject the client utility
    @Autowired
    private CuratorClientUtil curatorClientUtil;

    // Lock-specific ephemeral sequential nodes are created under /rootlock
    private String rootLock = "/rootlock";

    @GetMapping("/testlock")
    public Object testLock() throws Exception {
        // Current thread name, to observe which threads acquire the lock
        String threadName = Thread.currentThread().getName();
        InterProcessMutex mutex = new InterProcessMutex(curatorClientUtil.getClient(), rootLock);
        boolean lockFlag = false;
        try {
            log.info("{}---acquiring lock", threadName);
            // Wait up to 3000 seconds for the lock, long enough that
            // every queued thread eventually gets its turn
            lockFlag = mutex.acquire(3000, TimeUnit.SECONDS);
            if (lockFlag) {
                log.info("{}---lock acquired", threadName);
                // Simulate business processing that takes 3s
                Thread.sleep(3000);
            } else {
                log.info("{}---failed to acquire lock", threadName);
            }
        } catch (Exception e) {
            log.info("{}---exception while acquiring lock", threadName);
        } finally {
            // Release only if the lock was actually acquired; releasing
            // wakes the thread whose node has the next-higher sequence number
            if (lockFlag) {
                mutex.release();
                log.info("{}---lock released", threadName);
            }
        }
        return "Thread: " + threadName + " finished";
    }
}
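The controller's discipline of "timed acquire, release only if acquired" is the same pattern you would use with any lock. The sketch below shows it with a JDK fair `ReentrantLock` standing in for the ZooKeeper-backed InterProcessMutex; `LockPatternDemo` and `doWork` are hypothetical names for illustration.

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

// The controller's acquire/release pattern, with a JDK fair ReentrantLock
// standing in for the ZooKeeper-backed InterProcessMutex.
public class LockPatternDemo {
    private static final ReentrantLock lock = new ReentrantLock(true); // fair, like the ZK lock

    static String doWork() throws InterruptedException {
        boolean lockFlag = lock.tryLock(3, TimeUnit.SECONDS); // timed acquire
        try {
            if (lockFlag) {
                return "acquired"; // business processing would go here
            }
            return "timed out";
        } finally {
            if (lockFlag) {
                lock.unlock(); // never release a lock you don't hold
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(doWork()); // uncontended, so the acquire succeeds
    }
}
```

Guarding the release matters for InterProcessMutex too: releasing a mutex the thread does not hold throws an exception, which the original unguarded `finally` block would hit whenever the acquire timed out.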
JMeter test
We use JMeter to simulate 100 clients concurrently hitting localhost:8081/test/testlock, i.e. 100 clients competing for the distributed lock. As shown in the top-right of the figure, the 100 requests took 5 minutes 6 seconds. Each thread holds the lock for 3s of simulated business processing (Thread.sleep(3000)), so the ideal total for 100 serialized threads is 300s, which matches the observed run time.
The ephemeral sequential nodes each thread creates under /rootlock are shown in the figure below; because the nodes are ephemeral, they are deleted once the owning thread releases the lock.
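The child names in that figure follow ZooKeeper's sequential-node convention: a 10-digit, zero-padded counter appended to the node prefix. A hypothetical sketch of the naming scheme (`SequentialNameDemo` and the prefix string are illustrative, not Curator's exact internals):

```java
// ZooKeeper appends a 10-digit, zero-padded counter to the name of each
// sequential child node; this sketch reproduces that naming scheme.
public class SequentialNameDemo {
    static String childName(String prefix, long counter) {
        return String.format("%s%010d", prefix, counter);
    }

    public static void main(String[] args) {
        System.out.println(childName("lock-", 3));   // lock-0000000003
        System.out.println(childName("lock-", 42));  // lock-0000000042
    }
}
```

Because the counter is assigned by the ZooKeeper server, comparing these suffixes gives every contender an unambiguous, server-ordered queue position.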
Log output of the 100 threads:
For how InterProcessMutex implements the ZooKeeper distributed lock internally, see my other article:
This concludes the example of implementing a distributed lock with SpringBoot + ZooKeeper. Thanks for reading!