Implementing Nacos Data Source Configuration for ShardingJDBC via the SPI Mechanism
The SPI Mechanism Behind ShardingJDBC's Configuration Source Path

Since the latest versions of ShardingJDBC no longer support Nacos as a configuration data source, we can observe from the ShardingSphere JDBC dependency that its configuration-loading interface org.apache.shardingsphere.driver.jdbc.core.driver.ShardingSphereDriverURLProvider is based on the SPI mechanism, but only the following configuration sources are implemented by default: AbsolutePathDriverURLProvider (absolute path), ClasspathDriverURLProvider (classpath), and ApolloDriverURLProvider (Apollo). Therefore, we need to extend ShardingJDBC ourselves through its SPI mechanism in order to fetch the data source configuration file from Nacos.

Source Code Analysis of ShardingJDBC Configuration Loading

Following the interface's package path org.apache.shardingsphere.driver.jdbc.core.driver, we can find the interface and its implementation classes. The interface defines two methods:

boolean accept(String url): checks whether the given url should be handled by this provider.
byte[] getContent(String url): loads the configuration file from the url and returns it as a byte array.

If we place the configuration file db-sharding.yaml under src/main/resources, use classpath as the configuration file path (e.g. spring.datasource.url=jdbc:shardingsphere:classpath:db-sharding.yaml), set a breakpoint in the accept method of ClasspathDriverURLProvider, and start the service, we can see that the upper layer uses ShardingSphereDriverURLManager to iterate over every implementation class, calling accept on each to check whether it matches the current url; if it does, its getContent method is called to load the corresponding configuration file from that url. ...
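Based on that analysis, a custom provider for Nacos can be plugged in through the same SPI. The following is only a rough sketch of what such a provider might look like, not the article's actual implementation: the URL layout (jdbc:shardingsphere:nacos:serverAddr=...&dataId=...&group=...), the package name, and the parameter names are assumptions, and the Nacos client used is com.alibaba.nacos:nacos-client.

```java
// Hypothetical sketch of a Nacos-backed provider; URL shape and package are assumptions.
package com.example.shardingsphere.driver;

import java.nio.charset.StandardCharsets;
import java.util.Properties;

import com.alibaba.nacos.api.NacosFactory;
import com.alibaba.nacos.api.config.ConfigService;
import org.apache.shardingsphere.driver.jdbc.core.driver.ShardingSphereDriverURLProvider;

public final class NacosDriverURLProvider implements ShardingSphereDriverURLProvider {

    private static final String NACOS_TYPE = "nacos:";

    @Override
    public boolean accept(final String url) {
        // Handle only URLs that declare the nacos: configuration source.
        return null != url && url.contains(NACOS_TYPE);
    }

    @Override
    public byte[] getContent(final String url) {
        // Parse "serverAddr=...&dataId=...&group=..." from the part after "nacos:".
        String query = url.substring(url.indexOf(NACOS_TYPE) + NACOS_TYPE.length());
        Properties props = new Properties();
        for (String pair : query.split("&")) {
            String[] kv = pair.split("=", 2);
            props.setProperty(kv[0], kv.length > 1 ? kv[1] : "");
        }
        try {
            // serverAddr is picked up from the parsed properties by the Nacos client.
            ConfigService configService = NacosFactory.createConfigService(props);
            String content = configService.getConfig(
                    props.getProperty("dataId"), props.getProperty("group", "DEFAULT_GROUP"), 3000L);
            return content.getBytes(StandardCharsets.UTF_8);
        } catch (Exception ex) {
            throw new IllegalStateException("Failed to load ShardingSphere config from Nacos", ex);
        }
    }
}
```

For the JDK ServiceLoader to discover such a class, its fully qualified name would also need to be listed in a META-INF/services/org.apache.shardingsphere.driver.jdbc.core.driver.ShardingSphereDriverURLProvider file on the classpath.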
Deploying Dubbo Microservices with Docker
Deploying the Nacos Registry with Docker

First, deploy the Nacos registry. To run Nacos in standalone mode in a development environment, execute the following Docker command to pull the image and start the container:

```bash
docker run -d \
  --name nacos-server \
  -p 8848:8848 \
  -p 9848:9848 \
  -e MODE=standalone \
  nacos/nacos-server:2.0.2
```

Note that if you access the Nacos service inside the Docker container through a mock domain name (configured in /etc/hosts) in the development environment, an enabled SOCKS or similar proxy may cause the access to fail. If you use a proxy client such as Clash, you can disable its DNS feature in the configuration file and add a direct-connection rule for that domain:

```yaml
dns:
  enable: false
...
rules:
  - DOMAIN-SUFFIX,live.zjxjwxk.com,🎯 全球直连
```

If the deployment succeeds, the Nacos console can be reached at localhost:8848/nacos/index.html.

Maven Plugin Configuration

In the pom.xml of the microservice to be deployed, configure the following plugin so that a Docker image is built while Maven packages the project:

```xml
<build>
    <finalName>${artifactId}-docker</finalName>
    <plugins>
        <plugin>
            <groupId>com.spotify</groupId>
            <artifactId>docker-maven-plugin</artifactId>
            <version>1.2.0</version>
            <executions>
                <!-- Run the docker build goal when mvn install is executed -->
                <execution>
                    <id>build</id>
                    <phase>install</phase>
                    <goals>
                        <goal>build</goal>
                    </goals>
                </execution>
            </executions>
            <configuration>
                <imageTags>
                    <imageTag>${project.version}</imageTag>
                </imageTags>
                <imageName>${project.build.finalName}</imageName>
                <!-- Location of the Dockerfile -->
                <dockerDirectory>${project.basedir}/docker</dockerDirectory>
                <!-- Path of the jar; this matches the Dockerfile instruction that copies
                     the jar into the container (it could also be written in the Dockerfile) -->
                <resources>
                    <resource>
                        <targetPath>/</targetPath>
                        <!-- Copy the contents of the directory below into the Docker image -->
                        <directory>${project.build.directory}</directory>
                        <include>${project.build.finalName}.jar</include>
                    </resource>
                </resources>
            </configuration>
        </plugin>
        <plugin>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-maven-plugin</artifactId>
        </plugin>
    </plugins>
</build>
```

Writing the Dockerfile

Create a docker folder under the directory of the microservice to be deployed, and create the following Dockerfile in that folder (the path must match ${project.basedir}/docker configured in pom.xml): ...
MySQL Master-Slave Replication in Docker Containers
Create the Host Mount Directories for the Docker Containers

```bash
mkdir -p /var/lib/mysql/master1/conf
mkdir -p /var/lib/mysql/master1/data
mkdir -p /var/lib/mysql/slave1/conf
mkdir -p /var/lib/mysql/slave1/data
```

Create the MySQL Master and Slave Configuration Files

Create the master configuration file (host path /var/lib/mysql/master1/conf/my.cnf):

```ini
[mysqld]
character-set-server = utf8
lower-case-table-names = 1
# Replication: master configuration, unique master server ID
server-id = 1
# Enable the binary log
log-bin=mysql-bin
# Set the binlog format
binlog_format = STATEMENT
```

Create the slave configuration file (host path /var/lib/mysql/slave1/conf/my.cnf):

```ini
[mysqld]
character-set-server = utf8
lower-case-table-names = 1
# Replication: slave configuration, unique slave server ID
server-id = 2
# Enable the relay log
relay-log = mysql-relay
```

Grant permissions on the directory:

```bash
chmod -R 777 /var/lib/mysql
```

Note that the master and slave my.cnf files must not have 777 permissions, otherwise they will not take effect inside the containers; when logging in to MySQL inside the Docker container you will see: mysql: [Warning] World-writable config file '/etc/mysql/my.cnf' is ignored. The fix is to change the permissions of both my.cnf files on the host to a more restrictive 644: ...
Kafka Administration
TrustStore and KeyStore in Kafka
We need a secure deployment and encrypted communication on each of these four types of connections (the arrows in the diagram). Nowadays, TLS is most commonly used to secure a connection via a CA (Certificate Authority):
CA (Certificate Authority): the CA can sign other entities' certificates.
Client: the client puts certificates in the Trust Store and doesn't hold keys.
Server: the server puts certificates, public keys, and private keys in the Key Store. ...
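As a rough illustration of how the two stores surface on the client side (this example is mine, not from the post; the file paths, passwords, and broker address are placeholders), a Kafka client secured with TLS is typically given properties along these lines:

```java
// Minimal sketch: client-side SSL configuration for a Kafka producer or consumer.
// Paths, passwords, and the broker address below are placeholders.
import java.util.Properties;

public final class SslClientConfig {

    public static Properties sslProperties() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker-1:9093");          // assumed TLS listener
        props.put("security.protocol", "SSL");
        // Trust Store: holds the CA certificate(s) the client trusts.
        props.put("ssl.truststore.location", "/etc/kafka/secrets/client.truststore.jks");
        props.put("ssl.truststore.password", "changeit");
        // Key Store: only needed when the broker also authenticates the client (mutual TLS).
        props.put("ssl.keystore.location", "/etc/kafka/secrets/client.keystore.jks");
        props.put("ssl.keystore.password", "changeit");
        props.put("ssl.key.password", "changeit");
        return props;
    }
}
```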
Kafka Streams
Microservices with Kafka
To understand the logic behind building microservices with Kafka, I believe it is important to take a short tour of how databases work. The engine uses a file called the write-ahead log, or WAL. First, it writes the operation to the WAL. Then, it executes operations on the tables and indexes to reflect what the WAL said they should look like. We normally have two types of microservices: ...
Kafka Connect
ETL with Apache Kafka
One important statement is that Kafka Connect is not an ETL (Extract, Transform, Load) solution itself; it only connects. But with the help of the right plugins, it can take on some ETL capabilities.
Connectors
Source Connectors: transfer data from a Source to Kafka.
Sink Connectors: transfer data from Kafka to a Sink.
You can search through all of the connectors at the registry: https://www.confluent.io/product/connectors.
Standalone vs. Distributed ...
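As a hedged companion to the distributed mode mentioned above, connectors are usually registered through the Connect REST API. The sketch below is illustrative only: the worker URL, connector name, file path, and topic are assumptions, and the built-in FileStream source connector is used just as a convenient example.

```java
// Minimal sketch: registering a source connector against a Kafka Connect worker
// running in distributed mode. Worker URL, connector name, file, and topic are placeholders.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public final class RegisterConnector {

    public static void main(String[] args) throws Exception {
        String body = """
                {
                  "name": "demo-file-source",
                  "config": {
                    "connector.class": "org.apache.kafka.connect.file.FileStreamSourceConnector",
                    "tasks.max": "1",
                    "file": "/tmp/input.txt",
                    "topic": "demo-topic"
                  }
                }
                """;
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8083/connectors")) // default Connect REST port
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```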
Kafka Consumer
Kafka Consumer Group
Consumers typically run as a group. A single consumer becomes inefficient with large amounts of data; it may never catch up. Every consumer should run on its own machine, instance, or pod. The consumer group ID is the key by which Kafka knows that messages should be distributed across both consumers without duplication. If we add one more consumer to this group, the extra one will be idle, because one partition can't be shared across consumers: one partition can only be assigned to one consumer. Instead, partitions are Kafka's way to scale: more partitions mean you can have more consumers in the same consumer group. ...
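A minimal consumer sketch of my own, assuming a local broker, a topic named demo-topic, and a group id demo-consumer-group (all placeholders): starting a second copy of this process with the same group.id makes Kafka split the topic's partitions between the two instances.

```java
// Two copies of this process started with the same group.id share the topic's partitions.
import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public final class GroupedConsumer {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "demo-consumer-group");   // same id => same group => shared partitions
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());
        props.put("auto.offset.reset", "earliest");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("demo-topic"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("partition=%d offset=%d value=%s%n",
                            record.partition(), record.offset(), record.value());
                }
            }
        }
    }
}
```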
Kafka Producer
Kafka Producer Overview
Demo: Producing Messages with Kafka CLI
Run Kafka Containers
Create a docker-compose file docker-compose.yaml containing three Zookeepers, three Kafka Brokers, and a Kafka REST Proxy:

```yaml
---
version: '3'
services:
  zookeeper-1:
    image: confluentinc/cp-zookeeper:7.4.1
    hostname: zookeeper-1
    container_name: zookeeper-1
    volumes:
      - ./zookeeper-1_data:/var/lib/zookeeper/data
      - ./zookeeper-1_log:/var/lib/zookeeper/log
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
      ZOO_MY_ID: 1
      ZOO_SERVERS: server.1=zookeeper-1:2888:3888;2181 server.2=zookeeper-2:2888:3888;2181 server.3=zookeeper-3:2888:3888;2181
  zookeeper-2:
    image: confluentinc/cp-zookeeper:7.4.1
    hostname: zookeeper-2
    container_name: zookeeper-2
    volumes:
      - ./zookeeper-2_data:/var/lib/zookeeper/data
      - ./zookeeper-2_log:/var/lib/zookeeper/log
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
      ZOO_MY_ID: 2
      ZOO_SERVERS: server.1=zookeeper-1:2888:3888;2181 server.2=zookeeper-2:2888:3888;2181 server.3=zookeeper-3:2888:3888;2181
  zookeeper-3:
    image: confluentinc/cp-zookeeper:7.4.1
    hostname: zookeeper-3
    container_name: zookeeper-3
    volumes:
      - ./zookeeper-3_data:/var/lib/zookeeper/data
      - ./zookeeper-3_log:/var/lib/zookeeper/log
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
      ZOO_MY_ID: 3
      ZOO_SERVERS: server.1=zookeeper-1:2888:3888;2181 server.2=zookeeper-2:2888:3888;2181 server.3=zookeeper-3:2888:3888;2181
  broker-1:
    image: confluentinc/cp-kafka:7.4.1
    hostname: broker-1
    container_name: broker-1
    volumes:
      - ./broker-1-data:/var/lib/kafka/data
    depends_on:
      - zookeeper-1
      - zookeeper-2
      - zookeeper-3
    ports:
      - 9092:9092
      - 29092:29092
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper-1:2181
      KAFKA_ADVERTISED_LISTENERS: HOST://localhost:9092,INTERNAL://broker-1:29092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: HOST:PLAINTEXT,INTERNAL:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
      KAFKA_SNAPSHOT_TRUST_EMPTY: true
  broker-2:
    image: confluentinc/cp-kafka:7.4.1
    hostname: broker-2
    container_name: broker-2
    volumes:
      - ./broker-2-data:/var/lib/kafka/data
    depends_on:
      - zookeeper-1
      - zookeeper-2
      - zookeeper-3
      - broker-1
    ports:
      - 9093:9093
      - 29093:29093
    environment:
      KAFKA_BROKER_ID: 2
      KAFKA_ZOOKEEPER_CONNECT: zookeeper-1:2181
      KAFKA_ADVERTISED_LISTENERS: HOST://localhost:9093,INTERNAL://broker-2:29093
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: HOST:PLAINTEXT,INTERNAL:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
      KAFKA_SNAPSHOT_TRUST_EMPTY: true
  broker-3:
    image: confluentinc/cp-kafka:7.4.1
    hostname: broker-3
    container_name: broker-3
    volumes:
      - ./broker-3-data:/var/lib/kafka/data
    depends_on:
      - zookeeper-1
      - zookeeper-2
      - zookeeper-3
      - broker-1
      - broker-2
    ports:
      - 9094:9094
      - 29094:29094
    environment:
      KAFKA_BROKER_ID: 3
      KAFKA_ZOOKEEPER_CONNECT: zookeeper-1:2181
      KAFKA_ADVERTISED_LISTENERS: HOST://localhost:9094,INTERNAL://broker-3:29094
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: HOST:PLAINTEXT,INTERNAL:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
      KAFKA_SNAPSHOT_TRUST_EMPTY: true
  rest-proxy:
    image: confluentinc/cp-kafka-rest:7.4.1
    ports:
      - "8082:8082"
    depends_on:
      - zookeeper-1
      - zookeeper-2
      - zookeeper-3
      - broker-1
      - broker-2
      - broker-3
    hostname: rest-proxy
    container_name: rest-proxy
    environment:
      KAFKA_REST_HOST_NAME: rest-proxy
      KAFKA_REST_BOOTSTRAP_SERVERS: 'broker-1:29092,broker-2:29093,broker-3:29094'
      KAFKA_REST_LISTENERS: "http://0.0.0.0:8082"
```

Run composed containers: ...
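The demo in the post uses the Kafka CLI; as a hedged companion, the same thing can be done programmatically with the Java client. The topic name below is a placeholder, and localhost:9092 is the HOST listener published by broker-1 in the compose file above.

```java
// Minimal sketch: a producer that sends a few string messages to a placeholder topic.
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public final class SimpleProducer {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        props.put("acks", "all");   // wait for all in-sync replicas to acknowledge

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            for (int i = 0; i < 5; i++) {
                ProducerRecord<String, String> record =
                        new ProducerRecord<>("demo-topic", "key-" + i, "message-" + i);
                // send() is asynchronous; the callback reports the assigned partition/offset.
                producer.send(record, (metadata, exception) -> {
                    if (exception != null) {
                        exception.printStackTrace();
                    } else {
                        System.out.printf("sent to partition=%d offset=%d%n",
                                metadata.partition(), metadata.offset());
                    }
                });
            }
            producer.flush();
        }
    }
}
```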
Meet Kafka
Kafka Architecture
Kafka Message
Deploy Kafka
Create a docker-compose file docker-compose.yaml containing three Zookeepers, three Kafka Brokers, and a Kafka REST Proxy:

```yaml
---
version: '3'
services:
  zookeeper-1:
    image: confluentinc/cp-zookeeper:7.4.1
    hostname: zookeeper-1
    container_name: zookeeper-1
    volumes:
      - ./zookeeper-1_data:/var/lib/zookeeper/data
      - ./zookeeper-1_log:/var/lib/zookeeper/log
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
      ZOO_MY_ID: 1
      ZOO_SERVERS: server.1=zookeeper-1:2888:3888;2181 server.2=zookeeper-2:2888:3888;2181 server.3=zookeeper-3:2888:3888;2181
  zookeeper-2:
    image: confluentinc/cp-zookeeper:7.4.1
    hostname: zookeeper-2
    container_name: zookeeper-2
    volumes:
      - ./zookeeper-2_data:/var/lib/zookeeper/data
      - ./zookeeper-2_log:/var/lib/zookeeper/log
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
      ZOO_MY_ID: 2
      ZOO_SERVERS: server.1=zookeeper-1:2888:3888;2181 server.2=zookeeper-2:2888:3888;2181 server.3=zookeeper-3:2888:3888;2181
  zookeeper-3:
    image: confluentinc/cp-zookeeper:7.4.1
    hostname: zookeeper-3
    container_name: zookeeper-3
    volumes:
      - ./zookeeper-3_data:/var/lib/zookeeper/data
      - ./zookeeper-3_log:/var/lib/zookeeper/log
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
      ZOO_MY_ID: 3
      ZOO_SERVERS: server.1=zookeeper-1:2888:3888;2181 server.2=zookeeper-2:2888:3888;2181 server.3=zookeeper-3:2888:3888;2181
  broker-1:
    image: confluentinc/cp-kafka:7.4.1
    hostname: broker-1
    container_name: broker-1
    volumes:
      - ./broker-1-data:/var/lib/kafka/data
    depends_on:
      - zookeeper-1
      - zookeeper-2
      - zookeeper-3
    ports:
      - 9092:9092
      - 29092:29092
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper-1:2181
      KAFKA_ADVERTISED_LISTENERS: HOST://localhost:9092,INTERNAL://broker-1:29092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: HOST:PLAINTEXT,INTERNAL:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
      KAFKA_SNAPSHOT_TRUST_EMPTY: true
  broker-2:
    image: confluentinc/cp-kafka:7.4.1
    hostname: broker-2
    container_name: broker-2
    volumes:
      - ./broker-2-data:/var/lib/kafka/data
    depends_on:
      - zookeeper-1
      - zookeeper-2
      - zookeeper-3
      - broker-1
    ports:
      - 9093:9093
      - 29093:29093
    environment:
      KAFKA_BROKER_ID: 2
      KAFKA_ZOOKEEPER_CONNECT: zookeeper-1:2181
      KAFKA_ADVERTISED_LISTENERS: HOST://localhost:9093,INTERNAL://broker-2:29093
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: HOST:PLAINTEXT,INTERNAL:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
      KAFKA_SNAPSHOT_TRUST_EMPTY: true
  broker-3:
    image: confluentinc/cp-kafka:7.4.1
    hostname: broker-3
    container_name: broker-3
    volumes:
      - ./broker-3-data:/var/lib/kafka/data
    depends_on:
      - zookeeper-1
      - zookeeper-2
      - zookeeper-3
      - broker-1
      - broker-2
    ports:
      - 9094:9094
      - 29094:29094
    environment:
      KAFKA_BROKER_ID: 3
      KAFKA_ZOOKEEPER_CONNECT: zookeeper-1:2181
      KAFKA_ADVERTISED_LISTENERS: HOST://localhost:9094,INTERNAL://broker-3:29094
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: HOST:PLAINTEXT,INTERNAL:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
      KAFKA_SNAPSHOT_TRUST_EMPTY: true
  rest-proxy:
    image: confluentinc/cp-kafka-rest:7.4.1
    ports:
      - "8082:8082"
    depends_on:
      - zookeeper-1
      - zookeeper-2
      - zookeeper-3
      - broker-1
      - broker-2
      - broker-3
    hostname: rest-proxy
    container_name: rest-proxy
    environment:
      KAFKA_REST_HOST_NAME: rest-proxy
      KAFKA_REST_BOOTSTRAP_SERVERS: 'broker-1:29092,broker-2:29093,broker-3:29094'
      KAFKA_REST_LISTENERS: "http://0.0.0.0:8082"
```

Run composed containers: ...
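As a quick, hedged way to verify the deployment from code (not part of the original post), an AdminClient pointed at the three published HOST listeners can describe the cluster and create a test topic; the topic name and sizing below are placeholders.

```java
// Minimal sketch: list the brokers from the compose file above and create a test topic.
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;

public final class ClusterCheck {

    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // The three HOST listeners published by docker-compose.yaml.
        props.put("bootstrap.servers", "localhost:9092,localhost:9093,localhost:9094");

        try (AdminClient admin = AdminClient.create(props)) {
            // Should print the three brokers defined in the compose file.
            admin.describeCluster().nodes().get()
                    .forEach(node -> System.out.println("broker: " + node.idString() + " @ " + node.host()));

            // Create a topic with 3 partitions and replication factor 3 (one replica per broker).
            NewTopic topic = new NewTopic("demo-topic", 3, (short) 3);
            admin.createTopics(Collections.singleton(topic)).all().get();
            System.out.println("created topic demo-topic");
        }
    }
}
```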
Pub-sub System
We Call
The entity/app that creates a message a publisher or a producer.
The entity/app that consumes messages from a channel a consumer.
The system where the channels live and that handles these requests an Event Bus or, more recently, a streaming platform.
The channel where messages flow a channel or topic.
Definitions
We say a pub-sub system is reliable when it ensures there is no message loss; it has at-most-once processing when it ensures there is no message duplication; and it has exactly-once processing when each message is processed exactly once, neither lost nor duplicated. Of course, this is the holy grail.
We Had a Ton of Other Pub/Sub Systems Before ...