Implementing a Distributed Lock with ZooKeeper
In a distributed environment, synchronized and Lock are no longer enough, because they only coordinate threads within a single JVM. We need an external lock, and here we use ZooKeeper to implement one.
Comparison of distributed lock approaches

| Approach | Implementation idea | Pros | Cons |
| --- | --- | --- | --- |
| MySQL | Use the database's own locking mechanism (requires row-level locks) | Simple to implement | Poor performance, unsuitable for high concurrency; prone to deadlock; no clean way to implement a blocking lock |
| Redis | SETNX plus Lua scripts, keeping each sequence of cache operations atomic | Good performance | Relatively complex to implement; deadlock is possible; no clean way to implement a blocking lock |
| ZooKeeper | Based on ZooKeeper node semantics and the watch mechanism | Good performance, stable and reliable, blocking locks work well | Relatively complex to implement |
Implementing the lock with ZooKeeper
Using 50 concurrent requests for order numbers as the running example, two schemes are described below: the first is a basic implementation, and the second optimizes it.
Scheme 1
Flow:
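Before the lock code, it helps to see the hazard it prevents. The order counter below is a read-modify-write on shared state; the following deterministic, single-threaded sketch (class name is ours, not from the original) replays the interleaving in which one increment is lost:

```java
public class LostUpdateDemo {
    public static void main(String[] args) {
        long count = 0;
        // Two clients each read the counter before either writes back
        long readByA = count; // client A reads 0
        long readByB = count; // client B reads 0
        count = readByA + 1;  // A writes back 1
        count = readByB + 1;  // B also writes back 1 -- A's increment is lost
        System.out.println(count); // prints 1, not 2
    }
}
```

A distributed lock forces the read and the write-back to happen as one exclusive unit, across JVMs.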
(Figure: ZooKeeper distributed lock, scheme 1)
Code: OrderNumGenerator:

```java
import java.text.DecimalFormat;
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

public class OrderNumGenerator {

    // Shared counter; concurrent access to it is what the lock must protect
    private static long count = 0;

    public String getOrderNumber() throws Exception {
        String date = DateTimeFormatter.ofPattern("yyyyMMddHHmmss").format(LocalDateTime.now());
        String number = new DecimalFormat("000000").format(count++);
        return date + number;
    }
}
```
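As an aside, the DecimalFormat pattern left-pads the counter to six digits, so order numbers with the same timestamp still sort correctly. A small sketch (class name is ours, not from the original):

```java
import java.text.DecimalFormat;

public class PaddingDemo {
    public static void main(String[] args) {
        DecimalFormat fmt = new DecimalFormat("000000");
        // Each "0" in the pattern is a mandatory digit, so small values are zero-padded
        System.out.println(fmt.format(7));    // 000007
        System.out.println(fmt.format(4711)); // 004711
    }
}
```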
Lock:
```java
public interface Lock {

    void getLock();

    void unLock();
}
```
AbstractLock:
```java
// Template method: getLock() keeps trying until tryLock() succeeds,
// blocking in waitLock() between attempts
public abstract class AbstractLock implements Lock {

    @Override
    public void getLock() {
        if (tryLock()) {
            System.out.println("-------- acquired the custom Lock --------");
        } else {
            waitLock();
            getLock();
        }
    }

    public abstract boolean tryLock();

    public abstract void waitLock();
}
```
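AbstractLock fixes the acquisition algorithm and leaves tryLock/waitLock to subclasses. The same contract can be illustrated without any ZooKeeper, using a hypothetical in-JVM stand-in (an AtomicBoolean in place of the ephemeral node, and a loop instead of recursion); this sketch is ours and only demonstrates the pattern:

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class LocalLockDemo {

    static class LocalLock {
        private final AtomicBoolean held = new AtomicBoolean(false);

        public void getLock() {
            while (!tryLock()) {     // loop rather than recursion, same effect
                waitLock();
            }
        }

        public boolean tryLock() {
            // Succeeds only for the first caller, like creating the ephemeral node
            return held.compareAndSet(false, true);
        }

        public void waitLock() {
            Thread.onSpinWait();     // the real code blocks on a watch event instead
        }

        public void unLock() {
            held.set(false);         // like deleting the ephemeral node
        }
    }

    public static void main(String[] args) {
        LocalLock lock = new LocalLock();
        lock.getLock();
        System.out.println(lock.tryLock());  // false: already held
        lock.unLock();
        System.out.println(lock.tryLock());  // true: free again
    }
}
```

The ZooKeeper subclasses below plug real cluster coordination into exactly these two hooks.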
ZooKeeperAbstractLock:
```java
import org.I0Itec.zkclient.ZkClient;

// Shared state for both lock implementations (uses the I0Itec ZkClient library)
public abstract class ZooKeeperAbstractLock extends AbstractLock {

    private static final String SERVER_ADDR =
            "192.168.182.130:2181,192.168.182.131:2181,192.168.182.132:2181";

    protected ZkClient zkClient = new ZkClient(SERVER_ADDR);

    protected static final String PATH = "/lock";
}
```
ZooKeeperDistrbuteLock:
```java
import java.util.concurrent.CountDownLatch;

import org.I0Itec.zkclient.IZkDataListener;

public class ZooKeeperDistrbuteLock extends ZooKeeperAbstractLock {

    private CountDownLatch countDownLatch = null;

    @Override
    public boolean tryLock() {
        try {
            // Only one client can create the ephemeral node; creating it means holding the lock
            zkClient.createEphemeral(PATH);
            return true;
        } catch (Exception e) {
            // Node already exists: someone else holds the lock
            return false;
        }
    }

    @Override
    public void waitLock() {
        IZkDataListener iZkDataListener = new IZkDataListener() {
            @Override
            public void handleDataChange(String s, Object o) throws Exception {
            }

            @Override
            public void handleDataDeleted(String s) throws Exception {
                // The lock node was deleted: wake the waiting thread
                if (countDownLatch != null) {
                    countDownLatch.countDown();
                }
            }
        };

        // Every waiter watches the same /lock node -- this is the herd effect
        zkClient.subscribeDataChanges(PATH, iZkDataListener);
        if (zkClient.exists(PATH)) {
            countDownLatch = new CountDownLatch(1);
            try {
                countDownLatch.await();
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
        zkClient.unsubscribeDataChanges(PATH, iZkDataListener);
    }

    @Override
    public void unLock() {
        if (zkClient != null) {
            System.out.println("releasing the lock");
            zkClient.delete(PATH);
            // Closing the connection also removes any ephemeral nodes;
            // each instance of this lock is therefore single-use
            zkClient.close();
        }
    }
}
```
Test: 50 threads compete for the ZooKeeper-based distributed lock

```java
public class OrderService {

    private static class OrderNumGeneratorService implements Runnable {

        private OrderNumGenerator orderNumGenerator = new OrderNumGenerator();
        private Lock lock = new ZooKeeperDistrbuteLock();

        @Override
        public void run() {
            lock.getLock();
            try {
                System.out.println(Thread.currentThread().getName()
                        + ", generated order number: " + orderNumGenerator.getOrderNumber());
            } catch (Exception e) {
                e.printStackTrace();
            } finally {
                lock.unLock();
            }
        }
    }

    public static void main(String[] args) {
        System.out.println("---------- generating unique order numbers ----------");
        for (int i = 0; i < 50; i++) {
            new Thread(new OrderNumGeneratorService()).start();
        }
    }
}
```
Scheme 2
Scheme 2 optimizes scheme 1 to avoid the "herd effect". In scheme 1, as soon as the ephemeral node is deleted and the lock released, every thread watching that node wakes up and competes for the lock, hitting ZooKeeper at the same time. To avoid this contention, scheme 2 uses ephemeral sequential nodes: the nodes are ordered, and each one watches only the node immediately before it.
Flow:
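The predecessor-selection step can be sketched in plain Java. Given the sorted children of /lock, a node either is the smallest (and holds the lock) or watches exactly the child before it; the helper and sequence numbers below are our own illustration, not part of the lock code:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class PredecessorDemo {

    // Returns the child to watch, or null if `self` is the smallest (= lock holder)
    static String predecessorOf(String self, List<String> children) {
        List<String> sorted = new ArrayList<>(children);
        Collections.sort(sorted);  // sequence suffixes sort lexicographically
        int idx = sorted.indexOf(self);
        return idx <= 0 ? null : sorted.get(idx - 1);
    }

    public static void main(String[] args) {
        // Node names ZooKeeper might assign under /lock (hypothetical sequence numbers)
        List<String> children = List.of("0000000003", "0000000001", "0000000002");
        System.out.println(predecessorOf("0000000001", children)); // null -> holds the lock
        System.out.println(predecessorOf("0000000003", children)); // 0000000002
    }
}
```

Because each deletion wakes exactly one watcher, releasing the lock hands it to the next waiter in sequence instead of stampeding the whole group.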
(Figure: ZooKeeper distributed lock, scheme 2)
Code: only ZooKeeperDistrbuteLock from scheme 1 needs replacing. Add a ZooKeeperDistrbuteLock2 and use it in the test code; nothing else changes.

```java
import java.util.Collections;
import java.util.List;
import java.util.concurrent.CountDownLatch;

import org.I0Itec.zkclient.IZkDataListener;

public class ZooKeeperDistrbuteLock2 extends ZooKeeperAbstractLock {

    private CountDownLatch countDownLatch = null;
    private String beforePath;   // path of the node immediately before ours
    private String currentPath;  // path of our own ephemeral sequential node

    public ZooKeeperDistrbuteLock2() {
        // /lock itself is a persistent parent for the sequential children
        if (!zkClient.exists(PATH)) {
            zkClient.createPersistent(PATH);
        }
    }

    @Override
    public boolean tryLock() {
        if (currentPath == null || currentPath.length() == 0) {
            currentPath = zkClient.createEphemeralSequential(PATH + "/", "lock");
        }
        List<String> childrenList = zkClient.getChildren(PATH);
        Collections.sort(childrenList);
        if (currentPath.equals(PATH + "/" + childrenList.get(0))) {
            // Our node is the smallest: we hold the lock
            return true;
        } else {
            // Otherwise remember the node just before ours and watch only it
            // (substring(6) strips the "/lock/" prefix from our path)
            int wz = Collections.binarySearch(childrenList, currentPath.substring(6));
            beforePath = PATH + "/" + childrenList.get(wz - 1);
        }
        return false;
    }

    @Override
    public void waitLock() {
        IZkDataListener iZkDataListener = new IZkDataListener() {
            @Override
            public void handleDataChange(String s, Object o) throws Exception {
            }

            @Override
            public void handleDataDeleted(String s) throws Exception {
                if (countDownLatch != null) {
                    countDownLatch.countDown();
                }
            }
        };

        // Watch only the predecessor, so its deletion wakes exactly one waiter
        zkClient.subscribeDataChanges(beforePath, iZkDataListener);
        if (zkClient.exists(beforePath)) {
            countDownLatch = new CountDownLatch(1);
            try {
                countDownLatch.await();
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
        zkClient.unsubscribeDataChanges(beforePath, iZkDataListener);
    }

    @Override
    public void unLock() {
        if (zkClient != null) {
            System.out.println("releasing the lock");
            zkClient.delete(currentPath);
            zkClient.close();
        }
    }
}
```