After the split into distributed services: the monolith is broken up into microservices. The original three modules become three independent applications, each with its own data source, and one business operation now has to call all three services. Each service can still guarantee its own data consistency with a local transaction, but global data consistency can no longer be guaranteed.
Take a user buying a product as an example. The whole business flow is backed by three microservices:
- Storage service: deducts the stock quantity for the given product
- Order service: creates an order for the purchase
- Account service: deducts the purchase amount from the user's account balance
Once a single business operation spans multiple data sources, or requires remote calls across multiple systems, the distributed transaction problem appears.
2.1、What is Seata
- Seata is an open-source distributed transaction solution, dedicated to providing high-performance and easy-to-use distributed transaction services for microservice architectures.
A typical distributed transaction process
- Seata models distributed transaction processing as one ID plus three components
- Transaction ID (XID): the globally unique id of a transaction
- The three components:
  - Transaction Coordinator (TC): maintains the state of the global transaction and is responsible for coordinating and driving the commit or rollback of the global transaction;
  - Transaction Manager (TM): controls the boundary of the global transaction; it opens a global transaction and finally issues the decision to commit or roll it back globally;
  - Resource Manager (RM): controls branch transactions; it registers branches, reports their status, receives instructions from the TC, and drives the commit or rollback of the branch (local) transactions;
Processing flow
- The TM asks the TC to open a new global transaction; the global transaction is created and a globally unique XID is generated;
- The XID is propagated along the microservice call chain in the invocation context;
- Each RM registers its branch transaction with the TC and places it under the global transaction identified by that XID;
- The TM asks the TC to issue the global commit or rollback decision for that XID;
- The TC drives all branch transactions under that XID to complete the commit or rollback.
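To make the role names above concrete, here is a minimal, purely illustrative Java sketch (the class and method names such as PurchaseService are made up for this explanation, not part of the demo built later): the method annotated with @GlobalTransactional plays the TM role, while every Seata-proxied data source touched along the call chain plays an RM role.

import io.seata.spring.annotation.GlobalTransactional;

public class PurchaseService {

    // TM role: opening the global transaction (step 1) and finally asking the TC
    // to commit or roll back (step 4) is handled by the @GlobalTransactional aspect.
    @GlobalTransactional(name = "purchase-demo", rollbackFor = Exception.class)
    public void purchase(long userId, long productId, int count) {
        createOrder(userId, productId, count);  // local branch on the order DB; the RM registers it with the TC (steps 2-3)
        deductStorage(productId, count);        // remote call; the XID travels in the call context
        deductAccount(userId, count);           // another branch under the same XID
        // If any call above throws, the TC drives all registered branches to roll back (step 5).
    }

    private void createOrder(long userId, long productId, int count) { /* DB write through a Seata-proxied DataSource */ }
    private void deductStorage(long productId, int count) { /* Feign/REST call to the storage service */ }
    private void deductAccount(long userId, int count) { /* Feign/REST call to the account service */ }
}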
2.3、Download
2.4、Usage
- Spring local @Transactional
- Global @GlobalTransactional: Seata's distributed transaction solution
- Unzip seata-server-0.9.0.zip into a directory of your choice and modify the file.conf configuration file under the conf directory
  - Back up the original file.conf first
  - Main changes: a custom transaction service group name + switch the transaction log store mode to db (the default stores in files) + the database connection information
- file.conf
  service module:
    vgroup_mapping.my_test_tx_group = "fsp_tx_group"
  store module:
    mode = "db"
    url = "jdbc:mysql://127.0.0.1:3306/seata"
    user = "root"
    password = "root"
- Create a database named seata in MySQL 5.7
CREATE DATABASE seata;
USE seata;
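The db store mode needs the tables global_table, branch_table and lock_table inside this seata database; their authoritative DDL ships with the server as conf/db_store.sql. The abridged sketch below only shows what the TC persists — the column lists are simplified from memory, so use the shipped script for the real definitions:

-- global_table: one row per global transaction (keyed by XID)
CREATE TABLE global_table (
  xid                       VARCHAR(128) NOT NULL,
  transaction_id            BIGINT,
  status                    TINYINT NOT NULL,
  application_id            VARCHAR(64),
  transaction_service_group VARCHAR(64),
  transaction_name          VARCHAR(128),
  timeout                   INT,
  begin_time                BIGINT,
  application_data          VARCHAR(2000),
  gmt_create                DATETIME,
  gmt_modified              DATETIME,
  PRIMARY KEY (xid)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

-- branch_table: one row per branch transaction registered under an XID
CREATE TABLE branch_table (
  branch_id        BIGINT NOT NULL,
  xid              VARCHAR(128) NOT NULL,
  transaction_id   BIGINT,
  resource_id      VARCHAR(256),
  branch_type      VARCHAR(8),
  status           TINYINT,
  client_id        VARCHAR(64),
  application_data VARCHAR(2000),
  gmt_create       DATETIME,
  gmt_modified     DATETIME,
  PRIMARY KEY (branch_id)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

-- lock_table: the global row locks held by in-flight branches
CREATE TABLE lock_table (
  row_key        VARCHAR(128) NOT NULL,
  xid            VARCHAR(96),
  transaction_id BIGINT,
  branch_id      BIGINT,
  resource_id    VARCHAR(256),
  table_name     VARCHAR(32),
  pk             VARCHAR(36),
  gmt_create     DATETIME,
  gmt_modified   DATETIME,
  PRIMARY KEY (row_key)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;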
- Modify the registry.conf configuration file under the seata-server-0.9.0\seata\conf directory
Purpose: declare nacos as the registry and fill in the nacos connection information
registry {
  # file 、nacos 、eureka、redis、zk、consul、etcd3、sofa
  type = "nacos"
  nacos {
    serverAddr = "localhost:8848"
    namespace = ""
    cluster = "default"
  }
}
4、Preparing the order/storage/account business databases
For every demo below, start Nacos first and then Seata, and make sure both are up. If Seata is not running you will get the error: no available server to connect.
4.1、Distributed transaction business description
- Business description
We create three services: an order service, a storage (stock) service and an account service.
When a user places an order, the order service creates an order, then calls the storage service remotely to deduct the stock of the ordered product, then calls the account service remotely to deduct the balance of the user's account, and finally the order service sets the order status to finished.
This operation spans three databases and involves two remote calls, so it obviously has a distributed transaction problem.
- Place order --> deduct stock --> deduct account (balance)
4.2、Create the business databases
- seata_order: the database that stores orders
- seata_storage: the database that stores stock
- seata_account: the database that stores account information
- SQL to create them
CREATE DATABASE seata_order;
CREATE DATABASE seata_storage;
CREATE DATABASE seata_account;
- Create the t_order table in the seata_order database
CREATE TABLE t_order(
  `id` BIGINT(11) NOT NULL AUTO_INCREMENT PRIMARY KEY,
  `user_id` BIGINT(11) DEFAULT NULL COMMENT 'user id',
  `product_id` BIGINT(11) DEFAULT NULL COMMENT 'product id',
  `count` INT(11) DEFAULT NULL COMMENT 'quantity',
  `money` DECIMAL(11,0) DEFAULT NULL COMMENT 'amount',
  `status` INT(1) DEFAULT NULL COMMENT 'order status: 0: creating; 1: finished'
) ENGINE=INNODB AUTO_INCREMENT=7 DEFAULT CHARSET=utf8;
SELECT * FROM t_order;
- Create the t_storage table in the seata_storage database
CREATE TABLE t_storage (
  `id` BIGINT(11) NOT NULL AUTO_INCREMENT PRIMARY KEY,
  `product_id` BIGINT(11) DEFAULT NULL COMMENT 'product id',
  `total` INT(11) DEFAULT NULL COMMENT 'total stock',
  `used` INT(11) DEFAULT NULL COMMENT 'used stock',
  `residue` INT(11) DEFAULT NULL COMMENT 'remaining stock'
) ENGINE=INNODB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8;
INSERT INTO seata_storage.t_storage(`id`,`product_id`,`total`,`used`,`residue`) VALUES('1','1','100','0','100');
SELECT * FROM t_storage;
- Create the t_account table in the seata_account database
CREATE TABLE t_account(
  `id` BIGINT(11) NOT NULL AUTO_INCREMENT PRIMARY KEY COMMENT 'id',
  `user_id` BIGINT(11) DEFAULT NULL COMMENT 'user id',
  `total` DECIMAL(10,0) DEFAULT NULL COMMENT 'total credit',
  `used` DECIMAL(10,0) DEFAULT NULL COMMENT 'used credit',
  `residue` DECIMAL(10,0) DEFAULT '0' COMMENT 'remaining credit'
) ENGINE=INNODB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8;
INSERT INTO seata_account.t_account(`id`,`user_id`,`total`,`used`,`residue`) VALUES('1','1','1000','0','1000');
SELECT * FROM t_account;
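Because the demo uses the AT mode, every business database that takes part in a global transaction also needs an undo_log table, where the Seata client stores the before/after images used for rollback. Below is a sketch of the MySQL DDL commonly used with this Seata generation (run it in seata_order, seata_storage and seata_account; double-check it against the undo_log script that matches your Seata version):

CREATE TABLE `undo_log` (
  `id` BIGINT(20) NOT NULL AUTO_INCREMENT,
  `branch_id` BIGINT(20) NOT NULL,
  `xid` VARCHAR(100) NOT NULL,
  `context` VARCHAR(128) NOT NULL,
  `rollback_info` LONGBLOB NOT NULL,
  `log_status` INT(11) NOT NULL,
  `log_created` DATETIME NOT NULL,
  `log_modified` DATETIME NOT NULL,
  `ext` VARCHAR(100) DEFAULT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `ux_undo_log` (`xid`,`branch_id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;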
5、Preparing the order/storage/account microservices
Business requirement: place an order -> deduct stock -> deduct balance -> update the (order) status
5.1、Create the Order module
- Create a Spring Boot module named seata-order-service2001
- Add the following dependencies to pom.xml
<dependencies> <!--nacos--> <dependency> <groupId>com.alibaba.cloud</groupId> <artifactId>spring-cloud-starter-alibaba-nacos-discovery</artifactId> </dependency> <!--seata--> <dependency> <groupId>com.alibaba.cloud</groupId> <artifactId>spring-cloud-starter-alibaba-seata</artifactId> <exclusions> <exclusion> <artifactId>seata-all</artifactId> <groupId>io.seata</groupId> </exclusion> </exclusions> </dependency> <dependency> <groupId>io.seata</groupId> <artifactId>seata-all</artifactId> <version>0.9.0</version> </dependency> <!--feign--> <dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-starter-openfeign</artifactId> </dependency> <!--web-actuator--> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-web</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-actuator</artifactId> </dependency> <!--mysql-druid--> <dependency> <groupId>mysql</groupId> <artifactId>mysql-connector-java</artifactId> <version>5.1.37</version> </dependency> <dependency> <groupId>com.alibaba</groupId> <artifactId>druid-spring-boot-starter</artifactId> <version>1.1.10</version> </dependency> <dependency> <groupId>org.mybatis.spring.boot</groupId> <artifactId>mybatis-spring-boot-starter</artifactId> <version>2.0.0</version> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-test</artifactId> <scope>test</scope> </dependency> <dependency> <groupId>org.projectlombok</groupId> <artifactId>lombok</artifactId> <optional>true</optional> </dependency> </dependencies>
- Write application.yml
server:
  port: 2001
spring:
  application:
    name: seata-order-service
  cloud:
    alibaba:
      seata:
        # custom transaction service group name; must match the group configured for seata-server
        tx-service-group: jdy_tx_group
    nacos:
      discovery:
        server-addr: localhost:8848
  datasource:
    driver-class-name: com.mysql.jdbc.Driver
    url: jdbc:mysql://localhost:3306/seata_order
    username: root
    password: root
feign:
  hystrix:
    enabled: false
logging:
  level:
    io:
      seata: info
mybatis:
  mapperLocations: classpath:mapper/*.xml
- file.conf
transport {
# tcp udt unix-domain-socket
type = "TCP"
#NIO NATIVE
server = "NIO"
#enable heartbeat
heartbeat = true
#thread factory for netty
thread-factory {
boss-thread-prefix = "NettyBoss"
worker-thread-prefix = "NettyServerNIOWorker"
server-executor-thread-prefix = "NettyServerBizHandler"
share-boss-worker = false
client-selector-thread-prefix = "NettyClientSelector"
client-selector-thread-size = 1
client-worker-thread-prefix = "NettyClientWorkerThread"
# netty boss thread size,will not be used for UDT
boss-thread-size = 1
#auto default pin or 8
worker-thread-size = 8
}
shutdown {
# when destroy server, wait seconds
wait = 3
}
serialization = "seata"
compressor = "none"
}
service {
  vgroup_mapping.jdy_tx_group = "default" # map the custom transaction service group name
default.grouplist = "127.0.0.1:8091"
enableDegrade = false
disable = false
max.commit.retry.timeout = "-1"
max.rollback.retry.timeout = "-1"
disableGlobalTransaction = false
}
client {
async.commit.buffer.limit = 10000
lock {
retry.internal = 10
retry.times = 30
}
report.retry.count = 5
tm.commit.retry.count = 1
tm.rollback.retry.count = 1
}
## transaction log store
store {
## store mode: file、db
mode = "db"
## file store
file {
dir = "sessionStore"
# branch session size , if exceeded first try compress lockkey, still exceeded throws exceptions
max-branch-session-size = 16384
# globe session size , if exceeded throws exceptions
max-global-session-size = 512
# file buffer size , if exceeded allocate new buffer
file-write-buffer-cache-size = 16384
# when recover batch read size
session.reload.read_size = 100
# async, sync
flush-disk-mode = async
}
## database store
db {
## the implement of javax.sql.DataSource, such as DruidDataSource(druid)/BasicDataSource(dbcp) etc.
datasource = "dbcp"
## mysql/oracle/h2/oceanbase etc.
db-type = "mysql"
driver-class-name = "com.mysql.jdbc.Driver"
url = "jdbc:mysql://127.0.0.1:3306/seata"
user = "root"
password = "root"
min-conn = 1
max-conn = 3
global.table = "global_table"
branch.table = "branch_table"
lock-table = "lock_table"
query-limit = 100
}
}
lock {
## the lock store mode: local、remote
mode = "remote"
local {
## store locks in user's database
}
remote {
## store locks in the seata's server
}
}
recovery {
#schedule committing retry period in milliseconds
committing-retry-period = 1000
#schedule asyn committing retry period in milliseconds
asyn-committing-retry-period = 1000
#schedule rollbacking retry period in milliseconds
rollbacking-retry-period = 1000
#schedule timeout retry period in milliseconds
timeout-retry-period = 1000
}
transaction {
undo.data.validation = true
undo.log.serialization = "jackson"
undo.log.save.days = 7
#schedule delete expired undo_log in milliseconds
undo.log.delete.period = 86400000
undo.log.table = "undo_log"
}
## metrics settings
metrics {
enabled = false
registry-type = "compact"
# multi exporters use comma divided
exporter-list = "prometheus"
exporter-prometheus-port = 9898
}
support {
## spring
spring {
# auto proxy the DataSource bean
datasource.autoproxy = false
}
}
- registry.conf
registry {
# file 、nacos 、eureka、redis、zk、consul、etcd3、sofa
type = "nacos"
nacos {
serverAddr = "localhost:8848"
namespace = ""
cluster = "default"
}
eureka {
serviceUrl = "http://localhost:8761/eureka"
application = "default"
weight = "1"
}
redis {
serverAddr = "localhost:6379"
db = "0"
}
zk {
cluster = "default"
serverAddr = "127.0.0.1:2181"
session.timeout = 6000
connect.timeout = 2000
}
consul {
cluster = "default"
serverAddr = "127.0.0.1:8500"
}
etcd3 {
cluster = "default"
serverAddr = "http://localhost:2379"
}
sofa {
serverAddr = "127.0.0.1:9603"
application = "default"
region = "DEFAULT_ZONE"
datacenter = "DefaultDataCenter"
cluster = "default"
group = "SEATA_GROUP"
addressWaitTime = "3000"
}
file {
name = "file.conf"
}
}
config {
# file、nacos 、apollo、zk、consul、etcd3
type = "file"
nacos {
serverAddr = "localhost"
namespace = ""
}
consul {
serverAddr = "127.0.0.1:8500"
}
apollo {
app.id = "seata-server"
apollo.meta = "http://192.168.1.204:8801"
}
zk {
serverAddr = "127.0.0.1:2181"
session.timeout = 6000
connect.timeout = 2000
}
etcd3 {
serverAddr = "http://localhost:2379"
}
file {
name = "file.conf"
}
}
- domain
@Data @AllArgsConstructor @NoArgsConstructor public class CommonResult<T>{ private Integer code; private String message; private T data; public CommonResult(Integer code, String message){ this(code,message,null); } }
@Data @AllArgsConstructor @NoArgsConstructor public class Order{ private Long id; private Long userId; private Long productId; private Integer count; private BigDecimal money; private Integer status; //订单状态:0:创建中;1:已完结 }
- DAO interface and implementation
@Mapper public interface OrderDao{ //新建订单 void create(Order order); //修改订单状态,从零改为1 void update(@Param("userId") Long userId,@Param("status") Integer status); }
Create a mapper folder under resources and add OrderMapper.xml
<?xml version="1.0" encoding="UTF-8" ?>
<!DOCTYPE mapper PUBLIC "-//mybatis.org//DTD Mapper 3.0//EN" "http://mybatis.org/dtd/mybatis-3-mapper.dtd" >
<mapper namespace="com.jdy.springcloud.alibaba.dao.OrderDao">
    <resultMap id="BaseResultMap" type="com.jdy.springcloud.alibaba.domain.Order">
        <id column="id" property="id" jdbcType="BIGINT"/>
        <result column="user_id" property="userId" jdbcType="BIGINT"/>
        <result column="product_id" property="productId" jdbcType="BIGINT"/>
        <result column="count" property="count" jdbcType="INTEGER"/>
        <result column="money" property="money" jdbcType="DECIMAL"/>
        <result column="status" property="status" jdbcType="INTEGER"/>
    </resultMap>
    <insert id="create">
        insert into t_order (id,user_id,product_id,count,money,status)
        values (null,#{userId},#{productId},#{count},#{money},0);
    </insert>
    <update id="update">
        update t_order set status = 1 where user_id=#{userId} and status = #{status};
    </update>
</mapper>
- Service interface and implementation
public interface OrderService{ void create(Order order); }
@Service @Slf4j public class OrderServiceImpl implements OrderService{ @Resource private OrderDao orderDao; @Resource private StorageService storageService; @Resource private AccountService accountService; /** * 创建订单->调用库存服务扣减库存->调用账户服务扣减账户余额->修改订单状态 */ @Override @GlobalTransactional(name = "fsp-create-order",rollbackFor = Exception.class) public void create(Order order){ log.info("----->开始新建订单"); //新建订单 orderDao.create(order); //扣减库存 log.info("----->订单微服务开始调用库存,做扣减Count"); storageService.decrease(order.getProductId(),order.getCount()); log.info("----->订单微服务开始调用库存,做扣减end"); //扣减账户 log.info("----->订单微服务开始调用账户,做扣减Money"); accountService.decrease(order.getUserId(),order.getMoney()); log.info("----->订单微服务开始调用账户,做扣减end"); //修改订单状态,从零到1代表已经完成 log.info("----->修改订单状态开始"); orderDao.update(order.getUserId(),0); log.info("----->修改订单状态结束"); log.info("----->下订单结束了"); } }
- StorageService
@FeignClient(value = "seata-storage-service") public interface StorageService{ @PostMapping(value = "/storage/decrease") CommonResult decrease(@RequestParam("productId") Long productId, @RequestParam("count") Integer count); }
- AccountService
@FeignClient(value = "seata-account-service") public interface AccountService{ @PostMapping(value = "/account/decrease") CommonResult decrease(@RequestParam("userId") Long userId, @RequestParam("money") BigDecimal money); }
- Controller
@RestController public class OrderController{ @Resource private OrderService orderService; @GetMapping("/order/create") public CommonResult create(Order order){ orderService.create(order); return new CommonResult(200,""); } }
- Main application class
@EnableDiscoveryClient @EnableFeignClients @SpringBootApplication(exclude = DataSourceAutoConfiguration.class)//取消数据源自动创建的配置 @MapperScan({"com.jdy.springcloud.alibaba.dao"}) public class SeataOrderMainApp2001{ public static void main(String[] args){ SpringApplication.run(SeataOrderMainApp2001.class, args); } @Value("${mybatis.mapperLocations}") private String mapperLocations; @Bean @ConfigurationProperties(prefix = "spring.datasource") public DataSource druidDataSource(){ return new DruidDataSource(); } @Bean public DataSourceProxy dataSourceProxy(DataSource dataSource) { return new DataSourceProxy(dataSource); } @Bean public SqlSessionFactory sqlSessionFactoryBean(DataSourceProxy dataSourceProxy) throws Exception { SqlSessionFactoryBean sqlSessionFactoryBean = new SqlSessionFactoryBean(); sqlSessionFactoryBean.setDataSource(dataSourceProxy); sqlSessionFactoryBean.setMapperLocations(new PathMatchingResourcePatternResolver().getResources(mapperLocations)); sqlSessionFactoryBean.setTransactionFactory(new SpringManagedTransactionFactory()); return sqlSessionFactoryBean.getObject(); } }
5.2、Create the Storage module
- Create a Spring Boot module named seata-storage-service2002
- Add the following dependencies to pom.xml
<dependencies> <!--nacos--> <dependency> <groupId>com.alibaba.cloud</groupId> <artifactId>spring-cloud-starter-alibaba-nacos-discovery</artifactId> </dependency> <!--seata--> <dependency> <groupId>com.alibaba.cloud</groupId> <artifactId>spring-cloud-starter-alibaba-seata</artifactId> <exclusions> <exclusion> <artifactId>seata-all</artifactId> <groupId>io.seata</groupId> </exclusion> </exclusions> </dependency> <dependency> <groupId>io.seata</groupId> <artifactId>seata-all</artifactId> <version>0.9.0</version> </dependency> <!--feign--> <dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-starter-openfeign</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-web</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-test</artifactId> <scope>test</scope> </dependency> <dependency> <groupId>org.mybatis.spring.boot</groupId> <artifactId>mybatis-spring-boot-starter</artifactId> <version>2.0.0</version> </dependency> <dependency> <groupId>mysql</groupId> <artifactId>mysql-connector-java</artifactId> <version>5.1.37</version> </dependency> <dependency> <groupId>com.alibaba</groupId> <artifactId>druid-spring-boot-starter</artifactId> <version>1.1.10</version> </dependency> <dependency> <groupId>org.projectlombok</groupId> <artifactId>lombok</artifactId> <optional>true</optional> </dependency> </dependencies>
- Write application.yml
server:
  port: 2002
spring:
  application:
    name: seata-storage-service
  cloud:
    alibaba:
      seata:
        tx-service-group: jdy_tx_group
    nacos:
      discovery:
        server-addr: localhost:8848
  datasource:
    driver-class-name: com.mysql.jdbc.Driver
    url: jdbc:mysql://localhost:3306/seata_storage
    username: root
    password: 111111
logging:
  level:
    io:
      seata: info
mybatis:
  mapperLocations: classpath:mapper/*.xml
- file.conf
transport {
# tcp udt unix-domain-socket
type = "TCP"
#NIO NATIVE
server = "NIO"
#enable heartbeat
heartbeat = true
#thread factory for netty
thread-factory {
boss-thread-prefix = "NettyBoss"
worker-thread-prefix = "NettyServerNIOWorker"
server-executor-thread-prefix = "NettyServerBizHandler"
share-boss-worker = false
client-selector-thread-prefix = "NettyClientSelector"
client-selector-thread-size = 1
client-worker-thread-prefix = "NettyClientWorkerThread"
# netty boss thread size,will not be used for UDT
boss-thread-size = 1
#auto default pin or 8
worker-thread-size = 8
}
shutdown {
# when destroy server, wait seconds
wait = 3
}
serialization = "seata"
compressor = "none"
}
service {
#vgroup->rgroup
vgroup_mapping.jdy_tx_group = "default"
#only support single node
default.grouplist = "127.0.0.1:8091"
#degrade current not support
enableDegrade = false
#disable
disable = false
#unit ms,s,m,h,d represents milliseconds, seconds, minutes, hours, days, default permanent
max.commit.retry.timeout = "-1"
max.rollback.retry.timeout = "-1"
disableGlobalTransaction = false
}
client {
async.commit.buffer.limit = 10000
lock {
retry.internal = 10
retry.times = 30
}
report.retry.count = 5
tm.commit.retry.count = 1
tm.rollback.retry.count = 1
}
transaction {
undo.data.validation = true
undo.log.serialization = "jackson"
undo.log.save.days = 7
#schedule delete expired undo_log in milliseconds
undo.log.delete.period = 86400000
undo.log.table = "undo_log"
}
support {
## spring
spring {
# auto proxy the DataSource bean
datasource.autoproxy = false
}
}
- registry.conf
registry {
# file 、nacos 、eureka、redis、zk
type = "nacos"
nacos {
serverAddr = "localhost:8848"
namespace = ""
cluster = "default"
}
eureka {
serviceUrl = "http://localhost:8761/eureka"
application = "default"
weight = "1"
}
redis {
serverAddr = "localhost:6381"
db = "0"
}
zk {
cluster = "default"
serverAddr = "127.0.0.1:2181"
session.timeout = 6000
connect.timeout = 2000
}
file {
name = "file.conf"
}
}
config {
# file、nacos 、apollo、zk
type = "file"
nacos {
serverAddr = "localhost"
namespace = ""
cluster = "default"
}
apollo {
app.id = "fescar-server"
apollo.meta = "http://192.168.1.204:8801"
}
zk {
serverAddr = "127.0.0.1:2181"
session.timeout = 6000
connect.timeout = 2000
}
file {
name = "file.conf"
}
}
- domain
@Data @AllArgsConstructor @NoArgsConstructor public class CommonResult<T>{ private Integer code; private String message; private T data; public CommonResult(Integer code, String message) { this(code,message,null); } }
@Data public class Storage { private Long id; // 产品id private Long productId; //总库存 private Integer total; //已用库存 private Integer used; //剩余库存 private Integer residue; }
- DAO interface and implementation
@Mapper public interface StorageDao { //扣减库存信息 void decrease(@Param("productId") Long productId, @Param("count") Integer count); }
Create a mapper folder under resources and add StorageMapper.xml
<?xml version="1.0" encoding="UTF-8" ?> <!DOCTYPE mapper PUBLIC "-//mybatis.org//DTD Mapper 3.0//EN" "http://mybatis.org/dtd/mybatis-3-mapper.dtd" > <mapper namespace="com.jdy.springcloud.alibaba.dao.StorageDao"> <resultMap id="BaseResultMap" type="com.jdy.springcloud.alibaba.domain.Storage"> <id column="id" property="id" jdbcType="BIGINT"/> <result column="product_id" property="productId" jdbcType="BIGINT"/> <result column="total" property="total" jdbcType="INTEGER"/> <result column="used" property="used" jdbcType="INTEGER"/> <result column="residue" property="residue" jdbcType="INTEGER"/> </resultMap> <update id="decrease"> UPDATE t_storage SET used = used + #{count},residue = residue - #{count} WHERE product_id = #{productId} </update> </mapper>
- Service interface and implementation
public interface StorageService { // 扣减库存 void decrease(Long productId, Integer count); }
@Service public class StorageServiceImpl implements StorageService { private static final Logger LOGGER = LoggerFactory.getLogger(StorageServiceImpl.class); @Resource private StorageDao storageDao; // 扣减库存 @Override public void decrease(Long productId, Integer count) { LOGGER.info("------->storage-service中扣减库存开始"); storageDao.decrease(productId,count); LOGGER.info("------->storage-service中扣减库存结束"); } }
- Controller
@RestController public class StorageController { @Autowired private StorageService storageService; //deduct stock @RequestMapping("/storage/decrease") public CommonResult decrease(Long productId, Integer count) { storageService.decrease(productId, count); return new CommonResult(200,"扣减库存成功!"); } }
- Main application class
@SpringBootApplication(exclude = DataSourceAutoConfiguration.class) @EnableDiscoveryClient @EnableFeignClients @MapperScan({"com.jdy.springcloud.alibaba.dao"}) public class SeataStorageServiceApplication2002{ public static void main(String[] args){ SpringApplication.run(SeataStorageServiceApplication2002.class, args); } @Value("${mybatis.mapperLocations}") private String mapperLocations; @Bean @ConfigurationProperties(prefix = "spring.datasource") public DataSource druidDataSource(){ return new DruidDataSource(); } @Bean public DataSourceProxy dataSourceProxy(DataSource dataSource) { return new DataSourceProxy(dataSource); } @Bean public SqlSessionFactory sqlSessionFactoryBean(DataSourceProxy dataSourceProxy) throws Exception { SqlSessionFactoryBean sqlSessionFactoryBean = new SqlSessionFactoryBean(); sqlSessionFactoryBean.setDataSource(dataSourceProxy); sqlSessionFactoryBean.setMapperLocations(new PathMatchingResourcePatternResolver().getResources(mapperLocations)); sqlSessionFactoryBean.setTransactionFactory(new SpringManagedTransactionFactory()); return sqlSessionFactoryBean.getObject(); } }
5.3、Create the Account module
- Create a Spring Boot module named seata-account-service2003
- Add the following dependencies to pom.xml
<dependencies> <!--nacos--> <dependency> <groupId>com.alibaba.cloud</groupId> <artifactId>spring-cloud-starter-alibaba-nacos-discovery</artifactId> </dependency> <!--seata--> <dependency> <groupId>com.alibaba.cloud</groupId> <artifactId>spring-cloud-starter-alibaba-seata</artifactId> <exclusions> <exclusion> <artifactId>seata-all</artifactId> <groupId>io.seata</groupId> </exclusion> </exclusions> </dependency> <dependency> <groupId>io.seata</groupId> <artifactId>seata-all</artifactId> <version>0.9.0</version> </dependency> <!--feign--> <dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-starter-openfeign</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-web</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-test</artifactId> <scope>test</scope> </dependency> <dependency> <groupId>org.mybatis.spring.boot</groupId> <artifactId>mybatis-spring-boot-starter</artifactId> <version>2.0.0</version> </dependency> <dependency> <groupId>mysql</groupId> <artifactId>mysql-connector-java</artifactId> <version>5.1.37</version> </dependency> <dependency> <groupId>com.alibaba</groupId> <artifactId>druid-spring-boot-starter</artifactId> <version>1.1.10</version> </dependency> <dependency> <groupId>org.projectlombok</groupId> <artifactId>lombok</artifactId> <optional>true</optional> </dependency> </dependencies>
- Write application.yml
server:
  port: 2003
spring:
  application:
    name: seata-account-service
  cloud:
    alibaba:
      seata:
        # must match the vgroup_mapping key in this module's file.conf
        tx-service-group: jdy_tx_group
    nacos:
      discovery:
        server-addr: localhost:8848
  datasource:
    driver-class-name: com.mysql.jdbc.Driver
    url: jdbc:mysql://localhost:3306/seata_account
    username: root
    password: 1111111
feign:
  hystrix:
    enabled: false
logging:
  level:
    io:
      seata: info
mybatis:
  mapperLocations: classpath:mapper/*.xml
- file.conf
transport {
# tcp udt unix-domain-socket
type = "TCP"
#NIO NATIVE
server = "NIO"
#enable heartbeat
heartbeat = true
#thread factory for netty
thread-factory {
boss-thread-prefix = "NettyBoss"
worker-thread-prefix = "NettyServerNIOWorker"
server-executor-thread-prefix = "NettyServerBizHandler"
share-boss-worker = false
client-selector-thread-prefix = "NettyClientSelector"
client-selector-thread-size = 1
client-worker-thread-prefix = "NettyClientWorkerThread"
# netty boss thread size,will not be used for UDT
boss-thread-size = 1
#auto default pin or 8
worker-thread-size = 8
}
shutdown {
# when destroy server, wait seconds
wait = 3
}
serialization = "seata"
compressor = "none"
}
service {
  vgroup_mapping.jdy_tx_group = "default" # map the custom transaction service group name
default.grouplist = "127.0.0.1:8091"
enableDegrade = false
disable = false
max.commit.retry.timeout = "-1"
max.rollback.retry.timeout = "-1"
disableGlobalTransaction = false
}
client {
async.commit.buffer.limit = 10000
lock {
retry.internal = 10
retry.times = 30
}
report.retry.count = 5
tm.commit.retry.count = 1
tm.rollback.retry.count = 1
}
## transaction log store
store {
## store mode: file、db
mode = "db"
## file store
file {
dir = "sessionStore"
# branch session size , if exceeded first try compress lockkey, still exceeded throws exceptions
max-branch-session-size = 16384
# globe session size , if exceeded throws exceptions
max-global-session-size = 512
# file buffer size , if exceeded allocate new buffer
file-write-buffer-cache-size = 16384
# when recover batch read size
session.reload.read_size = 100
# async, sync
flush-disk-mode = async
}
## database store
db {
## the implement of javax.sql.DataSource, such as DruidDataSource(druid)/BasicDataSource(dbcp) etc.
datasource = "dbcp"
## mysql/oracle/h2/oceanbase etc.
db-type = "mysql"
driver-class-name = "com.mysql.jdbc.Driver"
url = "jdbc:mysql://127.0.0.1:3306/seata"
user = "root"
password = "root"
min-conn = 1
max-conn = 3
global.table = "global_table"
branch.table = "branch_table"
lock-table = "lock_table"
query-limit = 100
}
}
lock {
## the lock store mode: local、remote
mode = "remote"
local {
## store locks in user's database
}
remote {
## store locks in the seata's server
}
}
recovery {
#schedule committing retry period in milliseconds
committing-retry-period = 1000
#schedule asyn committing retry period in milliseconds
asyn-committing-retry-period = 1000
#schedule rollbacking retry period in milliseconds
rollbacking-retry-period = 1000
#schedule timeout retry period in milliseconds
timeout-retry-period = 1000
}
transaction {
undo.data.validation = true
undo.log.serialization = "jackson"
undo.log.save.days = 7
#schedule delete expired undo_log in milliseconds
undo.log.delete.period = 86400000
undo.log.table = "undo_log"
}
## metrics settings
metrics {
enabled = false
registry-type = "compact"
# multi exporters use comma divided
exporter-list = "prometheus"
exporter-prometheus-port = 9898
}
support {
## spring
spring {
# auto proxy the DataSource bean
datasource.autoproxy = false
}
}
- registry.conf
registry {
# file 、nacos 、eureka、redis、zk、consul、etcd3、sofa
type = "nacos"
nacos {
serverAddr = "localhost:8848"
namespace = ""
cluster = "default"
}
eureka {
serviceUrl = "http://localhost:8761/eureka"
application = "default"
weight = "1"
}
redis {
serverAddr = "localhost:6379"
db = "0"
}
zk {
cluster = "default"
serverAddr = "127.0.0.1:2181"
session.timeout = 6000
connect.timeout = 2000
}
consul {
cluster = "default"
serverAddr = "127.0.0.1:8500"
}
etcd3 {
cluster = "default"
serverAddr = "http://localhost:2379"
}
sofa {
serverAddr = "127.0.0.1:9603"
application = "default"
region = "DEFAULT_ZONE"
datacenter = "DefaultDataCenter"
cluster = "default"
group = "SEATA_GROUP"
addressWaitTime = "3000"
}
file {
name = "file.conf"
}
}
config {
# file、nacos 、apollo、zk、consul、etcd3
type = "file"
nacos {
serverAddr = "localhost"
namespace = ""
}
consul {
serverAddr = "127.0.0.1:8500"
}
apollo {
app.id = "seata-server"
apollo.meta = "http://192.168.1.204:8801"
}
zk {
serverAddr = "127.0.0.1:2181"
session.timeout = 6000
connect.timeout = 2000
}
etcd3 {
serverAddr = "http://localhost:2379"
}
file {
name = "file.conf"
}
}
- domain
@Data @AllArgsConstructor @NoArgsConstructor public class CommonResult<T>{ private Integer code; private String message; private T data; public CommonResult(Integer code, String message) { this(code,message,null); } }
@Data @AllArgsConstructor @NoArgsConstructor public class Account { private Long id; /** * 用户id */ private Long userId; /** * 总额度 */ private BigDecimal total; /** * 已用额度 */ private BigDecimal used; /** * 剩余额度 */ private BigDecimal residue; }
- DAO interface and implementation
@Mapper public interface AccountDao { /** * 扣减账户余额 */ void decrease(@Param("userId") Long userId, @Param("money") BigDecimal money); }
Create a mapper folder under resources and add AccountMapper.xml
<?xml version="1.0" encoding="UTF-8" ?> <!DOCTYPE mapper PUBLIC "-//mybatis.org//DTD Mapper 3.0//EN" "http://mybatis.org/dtd/mybatis-3-mapper.dtd" > <mapper namespace="com.jdy.springcloud.alibaba.dao.AccountDao"> <resultMap id="BaseResultMap" type="com.jdy.springcloud.alibaba.domain.Account"> <id column="id" property="id" jdbcType="BIGINT"/> <result column="user_id" property="userId" jdbcType="BIGINT"/> <result column="total" property="total" jdbcType="DECIMAL"/> <result column="used" property="used" jdbcType="DECIMAL"/> <result column="residue" property="residue" jdbcType="DECIMAL"/> </resultMap> <update id="decrease"> UPDATE t_account SET residue = residue - #{money},used = used + #{money} WHERE user_id = #{userId}; </update> </mapper>
- Service interface and implementation
public interface AccountService { /** * 扣减账户余额 */ void decrease(@RequestParam("userId") Long userId, @RequestParam("money") BigDecimal money); }
/** * 账户业务实现类 */ @Service public class AccountServiceImpl implements AccountService { private static final Logger LOGGER = LoggerFactory.getLogger(AccountServiceImpl.class); @Resource AccountDao accountDao; /** * 扣减账户余额 */ @Override public void decrease(Long userId, BigDecimal money) { LOGGER.info("------->account-service中扣减账户余额开始"); try { TimeUnit.SECONDS.sleep(20); } catch (InterruptedException e) { e.printStackTrace(); } accountDao.decrease(userId,money); LOGGER.info("------->account-service中扣减账户余额结束"); } }
- Controller
@RestController public class AccountController { @Resource AccountService accountService; /** * 扣减账户余额 */ @RequestMapping("/account/decrease") public CommonResult decrease(@RequestParam("userId") Long userId, @RequestParam("money") BigDecimal money){ accountService.decrease(userId,money); return new CommonResult(200,"扣减账户余额成功!"); } }
- Main application class
@SpringBootApplication(exclude = DataSourceAutoConfiguration.class) @EnableDiscoveryClient @EnableFeignClients @MapperScan({"com.jdy.springcloud.alibaba.dao"}) public class SeataAccountMainApp2003{ public static void main(String[] args){ SpringApplication.run(SeataAccountMainApp2003.class, args); } @Value("${mybatis.mapperLocations}") private String mapperLocations; @Bean @ConfigurationProperties(prefix = "spring.datasource") public DataSource druidDataSource(){ return new DruidDataSource(); } @Bean public DataSourceProxy dataSourceProxy(DataSource dataSource) { return new DataSourceProxy(dataSource); } @Bean public SqlSessionFactory sqlSessionFactoryBean(DataSourceProxy dataSourceProxy) throws Exception { SqlSessionFactoryBean sqlSessionFactoryBean = new SqlSessionFactoryBean(); sqlSessionFactoryBean.setDataSource(dataSourceProxy); sqlSessionFactoryBean.setMapperLocations(new PathMatchingResourcePatternResolver().getResources(mapperLocations)); sqlSessionFactoryBean.setTransactionFactory(new SpringManagedTransactionFactory()); return sqlSessionFactoryBean.getObject(); } }
6、Test
Business flow: place an order -> deduct stock -> deduct balance -> update the (order) status
- Initial database state
- Normal order: http://localhost:2001/order/create?userId=1&productId=1&count=10&money=100
- Fault scenario: add an artificial timeout to AccountServiceImpl, without @GlobalTransactional
  - database state afterwards
  - faults observed:
    - after the stock and the account balance have been deducted, the order status is never set to finished; it stays 0 instead of being changed to 1
    - moreover, because of Feign's retry mechanism, the account balance may even be deducted more than once (see the timeout/retry configuration sketch below)
- Timeout scenario again, this time with @GlobalTransactional
  - keep the timeout in AccountServiceImpl
  - annotate the create method of OrderServiceImpl with @GlobalTransactional
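While reproducing the timeout fault it helps to control the client-side timeout and retry behaviour explicitly. The snippet below is only a sketch of the Ribbon properties commonly used with this OpenFeign generation (it is not part of the original demo); disabling retries removes the duplicate-deduction symptom, and a larger read timeout lets the deliberately slow account service answer once the global transaction is in place:

ribbon:
  ConnectTimeout: 5000
  ReadTimeout: 30000          # the account service sleeps 20s on purpose
  MaxAutoRetries: 0           # no retry on the same instance
  MaxAutoRetriesNextServer: 0 # no retry on another instance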
7.1、Component roles in Seata
- TC (Transaction Coordinator): the server side, deployed separately; it maintains the state of global and branch transactions and drives the global commit or rollback.
- TM (Transaction Manager) and RM (Resource Manager): the client side, embedded in the business services. The TM defines the scope of the global transaction; the RM manages the resources used by branch transactions.
The TM controls the RMs indirectly: everything is coordinated through the TC.
- The TM opens the distributed transaction (the TM registers the global transaction record with the TC)
- The transaction resources (databases, services, ...) are orchestrated according to the business scenario (each RM reports its resource preparation status to the TC)
- The TM ends the distributed transaction; phase one is finished (the TM tells the TC to commit or roll back the distributed transaction)
- The TC aggregates the transaction information and decides whether the distributed transaction is committed or rolled back
- The TC tells all RMs to commit or roll back their resources; phase two is finished.
7.2、Seata's transaction modes
Seata provides four transaction modes for different business scenarios:
- AT mode: phase one and the phase-two commit and rollback (implemented with the help of the undo_log table) are all generated automatically by the Seata framework. Users only write the "business SQL" to get distributed transactions; AT is a non-intrusive distributed transaction solution.
- TCC mode: compared with AT, TCC is somewhat intrusive to the business code, but it does not use AT's global row locks, so its performance is much higher. It suits core systems with very high performance requirements.
- SAGA mode: Saga is the long-transaction solution. It is flow driven and suits business scenarios with long approval processes, e.g. the finance industry, where several levels of review are required.
- XA mode: a strongly consistent distributed solution, but its performance is low, so it is rarely used.
7.3、AT mode
The AT mode works in two phases:
- Phase one: the business data and the rollback log are committed in the same local transaction, then the local lock and the connection are released
- Phase two: the commit is asynchronous (or, if the global transaction fails, the rollback is performed as a reverse compensation based on the phase-one rollback log)
1、Phase one
In phase one Seata intercepts the "business SQL". It first parses the SQL to find the business data the statement is going to update and saves it as a "before image"; it then executes the business SQL, and after the data has been updated saves it again as an "after image"; finally it acquires the global row lock. All of this happens inside one database transaction, which guarantees the atomicity of phase one.
2、Phase two
If the global transaction fails, the business is rolled back:
Seata first compares the current business data in the database with the "after image" to detect dirty writes (similar to a CAS check); once the check passes, it restores the business data from the "before image" and deletes the intermediate data (the undo log record).
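A short SQL walk-through of one branch, using this demo's t_storage table and made-up values, may make the before/after image mechanics clearer (this is a conceptual sketch of what the Seata data-source proxy does, not literal Seata output):

-- Business SQL issued by the application:
UPDATE t_storage SET used = used + 10, residue = residue - 10 WHERE product_id = 1;

-- What happens inside the same local transaction (conceptually):
-- 1. before image: SELECT id, used, residue FROM t_storage WHERE product_id = 1;   -- e.g. used=0,  residue=100
-- 2. execute the business SQL above
-- 3. after image:  SELECT id, used, residue FROM t_storage WHERE product_id = 1;   -- e.g. used=10, residue=90
-- 4. write both images into undo_log, acquire the global row lock, then commit locally.

-- Phase-two rollback (only if the global transaction fails): check the current row still
-- matches the after image (no dirty write), then restore it from the before image:
UPDATE t_storage SET used = 0, residue = 100 WHERE id = 1;
-- and delete the corresponding undo_log record.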
7.4、TCC mode
The TCC mode splits a branch into three steps, as sketched below:
- Try: check the business preconditions and reserve the resources
- Confirm: confirm and actually commit the business operation
- Cancel: if the business fails and must be rolled back, cancel the branch's business operation and release the reserved resources
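A minimal sketch of what a TCC participant can look like with Seata's TCC annotations (io.seata.rm.tcc.api). The account-reservation design implied by the comments, and the names AccountTccAction/tryDeduct, are assumptions for illustration, not part of this demo:

import java.math.BigDecimal;
import io.seata.rm.tcc.api.BusinessActionContext;
import io.seata.rm.tcc.api.BusinessActionContextParameter;
import io.seata.rm.tcc.api.LocalTCC;
import io.seata.rm.tcc.api.TwoPhaseBusinessAction;

@LocalTCC
public interface AccountTccAction {

    // Try: check the balance and reserve the amount (e.g. move it into a "frozen" column).
    @TwoPhaseBusinessAction(name = "accountTccAction", commitMethod = "confirm", rollbackMethod = "cancel")
    boolean tryDeduct(BusinessActionContext context,
                      @BusinessActionContextParameter(paramName = "userId") Long userId,
                      @BusinessActionContextParameter(paramName = "money") BigDecimal money);

    // Confirm: actually consume the reserved amount. Must be idempotent.
    boolean confirm(BusinessActionContext context);

    // Cancel: release the reservation. Must be idempotent and tolerate an empty rollback.
    boolean cancel(BusinessActionContext context);
}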
Three anomalies commonly seen in TCC mode
1. Empty rollback
An empty rollback happens when, for a distributed transaction, the phase-two Cancel method is invoked although the Try method of the TCC resource was never called (for example because the machine crashed or the network failed). Cancel has to recognize that this is an empty rollback and simply return success.
Solution
Use an extra transaction-control table (see the DDL sketch after this subsection) that records the distributed transaction ID and the branch transaction ID. The Try method inserts a record to mark that phase one ran. Cancel reads that record: if it exists, roll back normally; if it does not exist, this is an empty rollback.
2. Idempotence
The phase-two interface of the same branch of the same distributed transaction may be called repeatedly, so the TCC Confirm and Cancel interfaces must be idempotent and must not consume or release resources more than once. If idempotence is not handled properly, severe consequences such as financial loss can follow.
Solution
Record the execution status of every branch transaction and check it before executing: if the branch has already been executed, do nothing; otherwise execute normally. The transaction-control table introduced for empty rollback already has one record per branch transaction, so adding a status column to that table is enough to track each branch's execution status.
3. Suspension
Suspension means that, for a distributed transaction, the phase-two Cancel interface runs before the Try interface. Because empty rollback is allowed, Cancel assumes Try never ran and returns success, so the Seata framework considers phase two finished and ends the distributed transaction — but the Try request that arrives later still reserves resources that nobody will ever confirm or cancel.
Solution
When phase two executes, insert a control record with status "rolled back". When the late phase-one Try executes, it first reads that record: if the record exists, phase two has already run and Try must not proceed; otherwise Try runs normally.
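A possible shape for the transaction-control table described above (the table name t_branch_control and the exact columns are illustrative assumptions, not a Seata-defined schema):

CREATE TABLE t_branch_control (
  xid          VARCHAR(128) NOT NULL COMMENT 'global transaction id',
  branch_id    BIGINT       NOT NULL COMMENT 'branch transaction id',
  status       TINYINT      NOT NULL COMMENT '1: tried, 2: committed, 3: rolled back, 4: suspended',
  gmt_create   DATETIME     NOT NULL,
  gmt_modified DATETIME     NOT NULL,
  PRIMARY KEY (xid, branch_id)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
-- Try inserts a row first; Cancel treats a missing row as an empty rollback;
-- Confirm/Cancel check the status before acting (idempotence); a "rolled back"/"suspended"
-- row written by Cancel lets a late Try detect that phase two has already run.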
7.5、Saga mode
The Saga mode is Seata's long-transaction solution. In Saga mode every participant commits its local transaction directly; when one participant fails, the participants that have already succeeded are compensated (their committed effects are undone by compensating actions, which restores atomicity from a business point of view). Both the phase-one forward services and the phase-two compensating services are implemented by the business developers.
The Saga mode currently provided by Seata is based on a state-machine engine. The mechanism is:
- the service invocation flow is defined as a state diagram, from which a JSON state-language definition file is generated
- a node in the state diagram can be a service invocation, and each node can be configured with its compensation node
- the state-machine engine drives the execution of the JSON state diagram; when an exception occurs, the engine executes, in reverse order, the compensation nodes of the nodes that already succeeded, rolling the transaction back
- it supports service-orchestration needs: choices, concurrency, sub-flows, parameter conversion and mapping, service execution status checks, exception catching, and so on
In short, Seata Saga implements a state machine that orchestrates the service invocation flow together with the compensating service of each forward service; the flow is a state diagram defined in a JSON file, the state-machine engine drives its execution, and when an exception occurs the state machine triggers a rollback and executes the compensating services one by one.
The empty-rollback, idempotence and suspension problems discussed for TCC also exist here.
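For a feel of the JSON state language, here is a heavily simplified sketch loosely modeled on the official Seata saga samples; the state and service names (createOrderSaga, storageAction, accountAction, ...) are invented for illustration and the exact schema should be taken from the official samples for your Seata version:

{
  "Name": "createOrderSaga",
  "Comment": "illustrative only: deduct storage, then deduct the account balance",
  "StartState": "DeductStorage",
  "Version": "0.0.1",
  "States": {
    "DeductStorage": {
      "Type": "ServiceTask",
      "ServiceName": "storageAction",
      "ServiceMethod": "decrease",
      "CompensateState": "CompensateDeductStorage",
      "Next": "DeductAccount"
    },
    "DeductAccount": {
      "Type": "ServiceTask",
      "ServiceName": "accountAction",
      "ServiceMethod": "decrease",
      "CompensateState": "CompensateDeductAccount",
      "Next": "Succeed"
    },
    "CompensateDeductStorage": {
      "Type": "ServiceTask",
      "ServiceName": "storageAction",
      "ServiceMethod": "compensateDecrease"
    },
    "CompensateDeductAccount": {
      "Type": "ServiceTask",
      "ServiceName": "accountAction",
      "ServiceMethod": "compensateDecrease"
    },
    "Succeed": {
      "Type": "Succeed"
    }
  }
}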
7.6、XA mode
XA mode is another non-intrusive distributed transaction solution that Seata will open-source:
- non-intrusive to the business code
- the snapshot data and row locks are delegated to the database itself through XA instructions
- XA mode is a strongly consistent distributed solution, but its performance is low, so it is rarely used.
Seata Server plays the transaction-coordinator role (the TC) in Seata: it maintains the state of global and branch transactions and drives the global commit or rollback. It aims to be highly available, high-performance and highly extensible.
- Coordinator Core: the bottom-most module, the core code of the transaction coordinator; it handles the coordination logic, i.e. whether to commit, roll back, and other coordination activities.
- Store: the storage module; it persists the data so nothing is lost on restart or crash.
- Discover: the service registration/discovery module; it exposes the server's address to the clients.
- Config: stores and looks up the server-side configuration.
- Lock: the lock module; it provides Seata's global lock facility.
- Rpc: communication with the other components.
- HA-Cluster: the high-availability cluster, not open-sourced yet; it will give Seata reliable high availability.