Learning Creational Design Patterns from Framework Source Code

Table of Contents

  • Creational Design Patterns in Framework Source Code
    • Factory Pattern
      • Case 1: RocketMQ source - creating a Producer
      • Case 2: RocketMQ source - creating a filter factory
    • Abstract Factory
      • Case 1: Dubbo source - an abstract factory for caches
      • Case 2: RocketMQ source - an abstract factory for logger objects
    • Singleton Pattern
      • Interviewer: how many ways are there to write a singleton?
      • Case 1: Dubbo source - eager initialization
      • Case 2: RocketMQ source - lazy initialization (not thread-safe)
      • Case 3: double-checked locking
      • Case 4: thread-safe synchronized method
      • Case 5: enum
      • Case 6: static inner class
    • Builder Pattern
      • Case: usage in Dubbo source
    • Prototype Pattern
      • Case: RocketMQ source - copying a byte array object

Creational Design Patterns in Framework Source Code

Concept: as the name suggests, creational design patterns create objects while hiding the creation logic, instead of instantiating objects directly with the new keyword. This makes the program more flexible when deciding whether and how a given object should be created.

Factory Pattern

The factory pattern is the most common design pattern. A factory encapsulates the object-creation logic and exposes an interface through which callers obtain objects.

Case 1: RocketMQ source - creating a Producer

public class ProducerFactory {

    public static DefaultMQProducer getRMQProducer(String ns) {
        DefaultMQProducer producer = new DefaultMQProducer(RandomUtil.getStringByUUID());
        producer.setInstanceName(UUID.randomUUID().toString());
        producer.setNamesrvAddr(ns);
        try {
            producer.start();
        } catch (MQClientException e) {
            e.printStackTrace();
        }
        return producer;
    }
}

Each call returns a new object instance.

Case 2: RocketMQ source - creating a filter factory

public class FilterFactory {

    public static final FilterFactory INSTANCE = new FilterFactory();

    protected static final Map<String, FilterSpi> FILTER_SPI_HOLDER = new HashMap<String, FilterSpi>(4);

    static {
        FilterFactory.INSTANCE.register(new SqlFilter());
    }

    public void register(FilterSpi filterSpi) {
        if (FILTER_SPI_HOLDER.containsKey(filterSpi.ofType())) {
            throw new IllegalArgumentException(
                String.format("Filter spi type(%s) already exist!", filterSpi.ofType()));
        }
        FILTER_SPI_HOLDER.put(filterSpi.ofType(), filterSpi);
    }

    public FilterSpi unRegister(String type) {
        return FILTER_SPI_HOLDER.remove(type);
    }

    public FilterSpi get(String type) {
        return FILTER_SPI_HOLDER.get(type);
    }
}

Once an object is created, it is cached in local memory, and later lookups fetch it from that cache.
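The same register-then-look-up idea can be sketched in a few lines of plain Java. The FilterSpi and SqlFilter names below are simplified stand-ins for illustration, not the real RocketMQ types:

```java
import java.util.HashMap;
import java.util.Map;

public class RegistryFactoryDemo {

    // simplified stand-in for RocketMQ's FilterSpi
    interface FilterSpi {
        String ofType();
    }

    static class SqlFilter implements FilterSpi {
        public String ofType() { return "SQL92"; }
    }

    static final Map<String, FilterSpi> HOLDER = new HashMap<>();

    // refuse duplicate registrations, then cache the instance
    static void register(FilterSpi spi) {
        if (HOLDER.containsKey(spi.ofType())) {
            throw new IllegalArgumentException("type already exists: " + spi.ofType());
        }
        HOLDER.put(spi.ofType(), spi);
    }

    static FilterSpi get(String type) {
        return HOLDER.get(type);
    }

    public static void main(String[] args) {
        register(new SqlFilter());
        // the same cached instance comes back on every lookup
        System.out.println(get("SQL92") == get("SQL92"));
    }
}
```

Unlike the Producer case, this style of factory hands out one shared instance per type rather than a fresh object per call.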

Abstract Factory

A factory that creates factories.
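Before diving into the framework code, here is a minimal standalone sketch of the shape these examples share: an abstract factory caches instances and defers the actual creation to subclasses. The Cache/CacheFactory names here are simplified illustrations, not Dubbo's real API, and computeIfAbsent is used to sidestep the check-then-put race:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class AbstractFactoryDemo {

    interface Cache { String name(); }

    // the abstract factory owns instance management; subclasses only create
    static abstract class CacheFactory {
        private final Map<String, Cache> caches = new ConcurrentHashMap<>();

        Cache getCache(String key) {
            return caches.computeIfAbsent(key, k -> createCache(k));
        }

        protected abstract Cache createCache(String key);
    }

    static class LruCacheFactory extends CacheFactory {
        protected Cache createCache(String key) {
            return () -> "lru:" + key;
        }
    }

    public static void main(String[] args) {
        CacheFactory factory = new LruCacheFactory();
        Cache a = factory.getCache("orders");
        Cache b = factory.getCache("orders");
        // repeated lookups return the same cached instance
        System.out.println(a == b);
        System.out.println(a.name());
    }
}
```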

Case 1: Dubbo source - an abstract factory for caches

First define the factory interface (@SPI marks it as an SPI extension point).

@SPI("lru")
public interface CacheFactory {

    @Adaptive("cache")
    Cache getCache(URL url, Invocation invocation);
}

Then define an abstract factory that implements the shared getCache() method. The instance-management logic lives in the abstract class, which exposes an abstract createCache() method; concrete factories only need to implement createCache() without worrying about instance management.

public abstract class AbstractCacheFactory implements CacheFactory {

    private final ConcurrentMap<String, Cache> caches = new ConcurrentHashMap<String, Cache>();

    @Override
    public Cache getCache(URL url, Invocation invocation) {
        url = url.addParameter(METHOD_KEY, invocation.getMethodName());
        String key = url.toFullString();
        Cache cache = caches.get(key);
        if (cache == null) {
            caches.put(key, createCache(url));
            cache = caches.get(key);
        }
        return cache;
    }

    protected abstract Cache createCache(URL url);
}

Concrete factory classes extend the abstract factory. Caches come in several flavors, such as LRU and LFU:

public class LruCacheFactory extends AbstractCacheFactory {
    @Override
    protected Cache createCache(URL url) {
        return new LruCache(url);
    }
}

public class LfuCacheFactory extends AbstractCacheFactory {
    @Override
    protected Cache createCache(URL url) {
        return new LfuCache(url);
    }
}

And the factory for the expiring (TTL-based) cache:

public class ExpiringCacheFactory extends AbstractCacheFactory {
    @Override
    protected Cache createCache(URL url) {
        return new ExpiringCache(url);
    }
}

Case 2: RocketMQ source - an abstract factory for logger objects

public abstract class InternalLoggerFactory {

    public static final String LOGGER_SLF4J = "slf4j";
    public static final String LOGGER_INNER = "inner";
    public static final String DEFAULT_LOGGER = LOGGER_SLF4J;

    private static String loggerType = null;

    // cache of registered factories
    private static ConcurrentHashMap<String, InternalLoggerFactory> loggerFactoryCache =
        new ConcurrentHashMap<String, InternalLoggerFactory>();

    // get a logger instance by class
    public static InternalLogger getLogger(Class clazz) {
        return getLogger(clazz.getName());
    }

    // get a logger instance by logger type name
    public static InternalLogger getLogger(String name) {
        return getLoggerFactory().getLoggerInstance(name);
    }

    // look up the concrete factory, falling back to the defaults
    private static InternalLoggerFactory getLoggerFactory() {
        InternalLoggerFactory internalLoggerFactory = null;
        if (loggerType != null) {
            internalLoggerFactory = loggerFactoryCache.get(loggerType);
        }
        if (internalLoggerFactory == null) {
            internalLoggerFactory = loggerFactoryCache.get(DEFAULT_LOGGER);
        }
        if (internalLoggerFactory == null) {
            internalLoggerFactory = loggerFactoryCache.get(LOGGER_INNER);
        }
        if (internalLoggerFactory == null) {
            throw new RuntimeException("[RocketMQ] Logger init failed, please check logger");
        }
        return internalLoggerFactory;
    }

    // set the current logger type
    public static void setCurrentLoggerType(String type) {
        loggerType = type;
    }

    // on startup, register the Slf4j factory and the inner logger factory into loggerFactoryCache
    static {
        try {
            new Slf4jLoggerFactory();
        } catch (Throwable e) {
            //ignore
        }
        try {
            new InnerLoggerFactory();
        } catch (Throwable e) {
            //ignore
        }
    }

    // registration logic
    protected void doRegister() {
        String loggerType = getLoggerType();
        if (loggerFactoryCache.get(loggerType) != null) {
            return;
        }
        loggerFactoryCache.put(loggerType, this);
    }

    protected abstract void shutdown();

    // create a logger instance
    protected abstract InternalLogger getLoggerInstance(String name);

    // the logger type this factory handles
    protected abstract String getLoggerType();
}

Slf4jLoggerFactory is one concrete implementation of the abstract factory:

public class Slf4jLoggerFactory extends InternalLoggerFactory {

    public Slf4jLoggerFactory() {
        LoggerFactory.getILoggerFactory();
        doRegister();
    }

    @Override
    protected String getLoggerType() {
        return InternalLoggerFactory.LOGGER_SLF4J;
    }

    @Override
    protected InternalLogger getLoggerInstance(String name) {
        return new Slf4jLogger(name);
    }

    @Override
    protected void shutdown() {
    }

    public static class Slf4jLogger implements InternalLogger {

        private Logger logger = null;

        public Slf4jLogger(String name) {
            logger = LoggerFactory.getLogger(name);
        }

        @Override
        public String getName() {
            return logger.getName();
        }

        @Override
        public void debug(String s) {
            logger.debug(s);
        }

        @Override
        public void info(String s) {
            logger.info(s);
        }

        @Override
        public void warn(String s) {
            logger.warn(s);
        }

        @Override
        public void warn(String s, Throwable throwable) {
            logger.warn(s, throwable);
        }

        @Override
        public void error(String s) {
            logger.error(s);
        }

        @Override
        public void error(String s, Throwable throwable) {
            logger.error(s, throwable);
        }
    }
}

Singleton Pattern

The singleton pattern is one of the simplest design patterns in Java. The class creates its own object, ensures that only a single instance ever exists, and provides a way to access that unique instance.

Interviewer: how many ways are there to write a singleton?

  1. Eager initialization: the instance is created at class-load time, which can waste memory if it is never used.
  2. Lazy initialization, not thread-safe: the instance is created on first use.
  3. Lazy initialization with a synchronized method: thread-safe, but every call pays the lock cost, so it is slow.
  4. Lazy initialization with double-checked locking: faster than option 3, but it still checks on every call and is verbose to write; the static-inner-class form is arguably simpler.
  5. Enum: the most concise approach; it gets serialization support for free and absolutely prevents multiple instantiation. Recommended.
  6. Static inner class: the outer class exposes a getInstance() method backed by a static inner holder class that creates the instance on first access; the ClassLoader mechanism guarantees that only one thread performs the initialization. Recommended.
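The laziness that options 2, 4, and 6 promise can be observed directly. This sketch (illustrative names, not from any framework) flips a flag inside the holder's initializer to show that nothing happens until the first getInstance() call:

```java
public class LazyHolderDemo {

    static boolean holderLoaded = false;

    static class Singleton {
        private Singleton() {}

        // the holder class is not initialized until first referenced,
        // and the JVM guarantees that initialization runs exactly once
        private static class Holder {
            static final Singleton INSTANCE = new Singleton();
            static { holderLoaded = true; }
        }

        static Singleton getInstance() {
            return Holder.INSTANCE;
        }
    }

    public static void main(String[] args) {
        System.out.println("before: " + holderLoaded);  // holder not loaded yet
        Singleton s1 = Singleton.getInstance();
        System.out.println("after: " + holderLoaded);   // now it is
        System.out.println(s1 == Singleton.getInstance());
    }
}
```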

Case 1: Dubbo source - eager initialization

public class ShutdownHookCallbacks {

    public static final ShutdownHookCallbacks INSTANCE = new ShutdownHookCallbacks();

    private final List<ShutdownHookCallback> callbacks = new LinkedList<>();

    ShutdownHookCallbacks() {
        loadCallbacks();
    }

    public ShutdownHookCallbacks addCallback(ShutdownHookCallback callback) {
        synchronized (this) {
            this.callbacks.add(callback);
        }
        return this;
    }

    public Collection<ShutdownHookCallback> getCallbacks() {
        synchronized (this) {
            sort(this.callbacks);
            return this.callbacks;
        }
    }

    public void clear() {
        synchronized (this) {
            callbacks.clear();
        }
    }

    private void loadCallbacks() {
        ExtensionLoader<ShutdownHookCallback> loader =
            ExtensionLoader.getExtensionLoader(ShutdownHookCallback.class);
        loader.getSupportedExtensionInstances().forEach(this::addCallback);
    }

    public void callback() {
        getCallbacks().forEach(callback -> execute(callback::callback));
    }
}

Case 2: RocketMQ source - lazy initialization (not thread-safe)

public class MQClientManager {

    private final static InternalLogger log = ClientLogger.getLog();
    private static MQClientManager instance = new MQClientManager();
    private AtomicInteger factoryIndexGenerator = new AtomicInteger();
    private ConcurrentMap<String/* clientId */, MQClientInstance> factoryTable =
        new ConcurrentHashMap<String, MQClientInstance>();

    private MQClientManager() {
    }

    public static MQClientManager getInstance() {
        return instance;
    }

    public MQClientInstance getOrCreateMQClientInstance(final ClientConfig clientConfig) {
        return getOrCreateMQClientInstance(clientConfig, null);
    }

    // lazy creation of the per-client instance
    public MQClientInstance getOrCreateMQClientInstance(final ClientConfig clientConfig, RPCHook rpcHook) {
        String clientId = clientConfig.buildMQClientId();
        MQClientInstance instance = this.factoryTable.get(clientId);
        if (null == instance) {
            instance =
                new MQClientInstance(clientConfig.cloneClientConfig(),
                    this.factoryIndexGenerator.getAndIncrement(), clientId, rpcHook);
            MQClientInstance prev = this.factoryTable.putIfAbsent(clientId, instance);
            if (prev != null) {
                instance = prev;
                log.warn("Returned Previous MQClientInstance for clientId:[{}]", clientId);
            } else {
                log.info("Created new MQClientInstance for clientId:[{}]", clientId);
            }
        }
        return instance;
    }

    public void removeClientFactory(final String clientId) {
        this.factoryTable.remove(clientId);
    }
}

Note: obtaining an instance this lazy way is not thread-safe by itself, so is RocketMQ wrong to use it? Used in isolation it would indeed be a problem, but the upper-level caller wraps the call in a synchronized method, so it is fine:

public class DefaultMQPullConsumerImpl implements MQConsumerInner {
    ...
    public synchronized void start() throws MQClientException {
        switch (this.serviceState) {
            case CREATE_JUST:
                this.serviceState = ServiceState.START_FAILED;
                this.checkConfig();
                this.copySubscription();
                if (this.defaultMQPullConsumer.getMessageModel() == MessageModel.CLUSTERING) {
                    this.defaultMQPullConsumer.changeInstanceNameToPID();
                }
                this.mQClientFactory = MQClientManager.getInstance()
                    .getOrCreateMQClientInstance(this.defaultMQPullConsumer, this.rpcHook);
                ...
}

Case 3: double-checked locking

public class Singleton {

    private volatile static Singleton singleton;

    private Singleton() {}

    public static Singleton getSingleton() {
        if (singleton == null) {
            synchronized (Singleton.class) {
                if (singleton == null) {
                    singleton = new Singleton();
                }
            }
        }
        return singleton;
    }
}
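A quick way to convince yourself that the double-checked lock hands out one instance even under contention is a throwaway test like the following (illustrative code, not from any framework). Note that the volatile modifier is essential: it prevents other threads from observing a reference to a partially constructed object.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class DclDemo {

    static class Singleton {
        private static volatile Singleton instance;
        private Singleton() {}

        static Singleton getInstance() {
            if (instance == null) {                      // first check, no lock
                synchronized (Singleton.class) {
                    if (instance == null) {              // second check, under lock
                        instance = new Singleton();
                    }
                }
            }
            return instance;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Set<Singleton> seen = ConcurrentHashMap.newKeySet();
        Thread[] threads = new Thread[8];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> seen.add(Singleton.getInstance()));
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join();
        }
        // all 8 threads observed the same instance
        System.out.println("distinct instances: " + seen.size());
    }
}
```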

Case 4: thread-safe synchronized method

public class Singleton {

    private static Singleton instance;

    private Singleton() {}

    public static synchronized Singleton getInstance() {
        if (instance == null) {
            instance = new Singleton();
        }
        return instance;
    }
}

Case 5: enum

public enum Singleton {
    INSTANCE;

    public void doSomething() {
        System.out.println("doSomething");
    }
}

Case 6: static inner class

public class Singleton {

    private static class SingletonHolder {
        private static final Singleton INSTANCE = new Singleton();
    }

    private Singleton() {}

    public static final Singleton getInstance() {
        return SingletonHolder.INSTANCE;
    }
}

Builder Pattern

Builds a complex object step by step out of multiple simple parts.

Case: usage in Dubbo source

package org.apache.dubbo.config.bootstrap.builders;

import org.apache.dubbo.config.RegistryConfig;

import java.util.Map;

/**
 * This is a builder for build {@link RegistryConfig}.
 *
 * @since 2.7
 */
public class RegistryBuilder extends AbstractBuilder<RegistryConfig, RegistryBuilder> {

    /** Register center address */
    private String address;

    /** Username to login register center */
    private String username;

    /** Password to login register center */
    private String password;

    /** Default port for register center */
    private Integer port;

    /** Protocol for register center */
    private String protocol;

    /** Network transmission type */
    private String transporter;

    private String server;
    private String client;
    private String cluster;

    ...

    public static RegistryBuilder newBuilder() {
        return new RegistryBuilder();
    }

    public RegistryBuilder id(String id) {
        return super.id(id);
    }

    public RegistryBuilder address(String address) {
        this.address = address;
        return getThis();
    }

    public RegistryBuilder username(String username) {
        this.username = username;
        return getThis();
    }

    public RegistryBuilder password(String password) {
        this.password = password;
        return getThis();
    }

    ...

    public RegistryConfig build() {
        RegistryConfig registry = new RegistryConfig();
        super.build(registry);
        registry.setCheck(check);
        registry.setClient(client);
        registry.setCluster(cluster);
        registry.setDefault(isDefault);
        registry.setDynamic(dynamic);
        registry.setExtraKeys(extraKeys);
        registry.setFile(file);
        registry.setGroup(group);
        registry.setParameters(parameters);
        registry.setPassword(password);
        registry.setPort(port);
        registry.setProtocol(protocol);
        registry.setRegister(register);
        registry.setServer(server);
        registry.setSession(session);
        registry.setSimplified(simplified);
        registry.setSubscribe(subscribe);
        registry.setTimeout(timeout);
        registry.setTransporter(transporter);
        registry.setUsername(username);
        registry.setVersion(version);
        registry.setWait(wait);
        registry.setUseAsConfigCenter(useAsConfigCenter);
        registry.setUseAsMetadataCenter(useAsMetadataCenter);
        registry.setAccepts(accepts);
        registry.setPreferred(preferred);
        registry.setWeight(weight);
        registry.setAddress(address);
        return registry;
    }

    @Override
    protected RegistryBuilder getThis() {
        return this;
    }
}
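The essence of the pattern can be reduced to a few lines. This standalone sketch mirrors the fluent newBuilder()/build() shape above with simplified stand-in classes (not Dubbo's real types, which carry many more fields):

```java
public class BuilderDemo {

    // the immutable product
    static class RegistryConfig {
        final String address;
        final String username;

        RegistryConfig(String address, String username) {
            this.address = address;
            this.username = username;
        }
    }

    static class RegistryConfigBuilder {
        private String address;
        private String username;

        static RegistryConfigBuilder newBuilder() {
            return new RegistryConfigBuilder();
        }

        RegistryConfigBuilder address(String address) {
            this.address = address;
            return this;   // returning this enables fluent chaining
        }

        RegistryConfigBuilder username(String username) {
            this.username = username;
            return this;
        }

        RegistryConfig build() {
            return new RegistryConfig(address, username);
        }
    }

    public static void main(String[] args) {
        RegistryConfig config = RegistryConfigBuilder.newBuilder()
            .address("zookeeper://127.0.0.1:2181")
            .username("admin")
            .build();
        System.out.println(config.address);
    }
}
```

Each setter configures one simple part and returns the builder itself, so the complex object is assembled step by step and only materialized by the final build() call.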

Prototype Pattern

Used to copy objects, typically by implementing clone() from the Cloneable interface; be mindful of deep versus shallow copies.
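The deep-versus-shallow distinction is easy to demonstrate with a toy class (illustrative code, not RocketMQ's). A shallow copy shares the underlying array with the original, while a deep copy duplicates it, which is what the BitsArray.clone() in the case below does:

```java
import java.util.Arrays;

public class CloneDemo {

    static class Box implements Cloneable {
        int[] data;

        Box(int[] data) {
            this.data = data;
        }

        // shallow copy: the clone shares the same int[] with the original
        Box shallowCopy() throws CloneNotSupportedException {
            return (Box) super.clone();
        }

        // deep copy: the array itself is duplicated
        Box deepCopy() {
            return new Box(Arrays.copyOf(data, data.length));
        }
    }

    public static void main(String[] args) throws CloneNotSupportedException {
        Box original = new Box(new int[] {1, 2, 3});

        Box shallow = original.shallowCopy();
        Box deep = original.deepCopy();

        original.data[0] = 99;

        // the shallow copy sees the mutation, the deep copy does not
        System.out.println(shallow.data[0]);
        System.out.println(deep.data[0]);
    }
}
```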

Case: RocketMQ source - copying a byte array object

package org.apache.rocketmq.filter.util;

/**
 * Wrapper of bytes array, in order to operate single bit easily.
 */
public class BitsArray implements Cloneable {

    private byte[] bytes;
    private int bitLength;

    public static BitsArray create(int bitLength) {
        return new BitsArray(bitLength);
    }

    private BitsArray(int bitLength) {
        this.bitLength = bitLength;
        // init bytes
        int temp = bitLength / Byte.SIZE;
        if (bitLength % Byte.SIZE > 0) {
            temp++;
        }
        bytes = new byte[temp];
        for (int i = 0; i < bytes.length; i++) {
            bytes[i] = (byte) 0x00;
        }
    }

    private BitsArray(byte[] bytes) {
        if (bytes == null || bytes.length < 1) {
            throw new IllegalArgumentException("Bytes is empty!");
        }
        this.bitLength = bytes.length * Byte.SIZE;
        this.bytes = new byte[bytes.length];
        System.arraycopy(bytes, 0, this.bytes, 0, this.bytes.length);
    }

    ...

    public BitsArray clone() {
        byte[] clone = new byte[this.byteLength()];
        System.arraycopy(this.bytes, 0, clone, 0, this.byteLength());
        return create(clone, bitLength());
    }
}
