Reinforcement Learning: Playing Tetris with a DQN

Source: https://blog.csdn.net/qq128252/article/details/129145534

Contents

  • Algorithm flow
  • Directory layout
  • Model architecture
  • Game environment
  • Training code
  • Test code
  • Results

Algorithm flow

The goal of this project is to train a Tetris-playing agent with deep reinforcement learning (DQN). Concretely, the code trains as follows:

  • Fix the random seeds so that later runs are reproducible.
  • Create a Tetris environment instance, a playable Tetris simulation that mediates the agent's interaction with the game.
  • Create a DeepQNetwork model, a small value network that scores candidate next states.
  • Create an optimizer and a loss function (criterion) for training the model.
  • In each training epoch, enumerate all possible next states (next_steps) from the current state, choose an action by an epsilon-greedy policy (exploration or exploitation; see the sketch after this list), execute it, and obtain the resulting reward and the terminal flag (done).
  • Append the transition (current state, reward, next state, done) to the replay memory.
  • If the episode has ended, reset the environment and record the final score (final_score), the number of pieces placed (final_tetrominoes), and the number of cleared lines (final_cleared_lines).
  • Sample a random batch from the replay memory and train on it: unpack state_batch, reward_batch, next_state_batch, and done_batch and convert each to a tensor; compute the target value y_batch for every sample; compute the loss between the predicted Q-values and y_batch; backpropagate the gradients; and step the optimizer to update the model parameters.
  • Print the current epoch's progress and log score, piece count, and cleared lines to TensorBoard.
  • Every save_interval epochs, save the model to disk.
  • Repeat until the configured number of epochs is reached.
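
The action-selection rule and the learning target are worth seeing in isolation. Below is a minimal sketch of both; the constants mirror the defaults in train.py, and epsilon_at / td_target are illustrative helper names, not functions that exist in the project:

GAMMA = 0.99                        # discount factor (opt.gamma)
INITIAL_EPS, FINAL_EPS = 1.0, 1e-3  # opt.initial_epsilon, opt.final_epsilon
NUM_DECAY_EPOCHS = 2000             # opt.num_decay_epochs

def epsilon_at(epoch):
    # Linear decay from INITIAL_EPS down to FINAL_EPS over NUM_DECAY_EPOCHS
    # epochs, held constant at FINAL_EPS afterwards.
    return FINAL_EPS + max(NUM_DECAY_EPOCHS - epoch, 0) * (INITIAL_EPS - FINAL_EPS) / NUM_DECAY_EPOCHS

def td_target(reward, done, next_value, gamma=GAMMA):
    # One-step Bellman target: a terminal transition contributes only its
    # reward; otherwise the discounted value of the chosen next state is added.
    return reward if done else reward + gamma * next_value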

Directory layout

├── output.mp4
├── src
│   ├── deep_q_network.py                                    model architecture
│   └── tetris.py                                            game environment
├── tensorboard
│   └── events.out.tfevents.1676879249.aifs3-worker-2
├── test.py                                                  test script
├── trained_models                                           saved checkpoints
│   ├── tetris
│   ├── tetris_1000
│   ├── tetris_1500
│   ├── tetris_2000
│   └── tetris_500
└── train.py                                                 training script

Model architecture

import torch.nn as nn


class DeepQNetwork(nn.Module):
    def __init__(self):
        super(DeepQNetwork, self).__init__()
        # Despite the "conv" attribute names, these are fully connected layers:
        # 4 state features -> 64 -> 64 -> 1 scalar state value.
        self.conv1 = nn.Sequential(nn.Linear(4, 64), nn.ReLU(inplace=True))
        self.conv2 = nn.Sequential(nn.Linear(64, 64), nn.ReLU(inplace=True))
        self.conv3 = nn.Sequential(nn.Linear(64, 1))

        self._create_weights()

    def _create_weights(self):
        # Xavier-initialize the weights, zero the biases.
        for m in self.modules():
            if isinstance(m, nn.Linear):
                nn.init.xavier_uniform_(m.weight)
                nn.init.constant_(m.bias, 0)

    def forward(self, x):
        x = self.conv1(x)
        x = self.conv2(x)
        x = self.conv3(x)
        return x
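
Because the network scores whole candidate states rather than (state, action) pairs, evaluating a batch of placements is a single forward pass. A quick sanity check, run from the project root (hypothetical snippet, not part of the repository):

import torch
from src.deep_q_network import DeepQNetwork

model = DeepQNetwork()
# Three candidate states, each described by
# [lines_cleared, holes, bumpiness, height]:
states = torch.FloatTensor([[1.0, 0.0, 2.0, 5.0],
                            [0.0, 3.0, 6.0, 9.0],
                            [0.0, 1.0, 4.0, 7.0]])
values = model(states)  # shape (3, 1): one value per candidate state
print(values.shape)     # torch.Size([3, 1])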

Game environment

import numpy as np
from PIL import Image
import cv2
from matplotlib import style
import torch
import random

style.use("ggplot")


class Tetris:
    piece_colors = [
        (0, 0, 0),
        (255, 255, 0),
        (147, 88, 254),
        (54, 175, 144),
        (255, 0, 0),
        (102, 217, 238),
        (254, 151, 32),
        (0, 0, 255)
    ]

    pieces = [
        [[1, 1],
         [1, 1]],

        [[0, 2, 0],
         [2, 2, 2]],

        [[0, 3, 3],
         [3, 3, 0]],

        [[4, 4, 0],
         [0, 4, 4]],

        [[5, 5, 5, 5]],

        [[0, 0, 6],
         [6, 6, 6]],

        [[7, 0, 0],
         [7, 7, 7]]
    ]

    def __init__(self, height=20, width=10, block_size=20):
        self.height = height
        self.width = width
        self.block_size = block_size
        # Side panel used for the score/pieces/lines overlay when rendering.
        self.extra_board = np.ones((self.height * self.block_size, self.width * int(self.block_size / 2), 3),
                                   dtype=np.uint8) * np.array([204, 204, 255], dtype=np.uint8)
        self.text_color = (200, 20, 220)
        self.reset()

    # -------------------------------------------------------------------------
    #  Reset the game
    # -------------------------------------------------------------------------
    def reset(self):
        self.board = [[0] * self.width for _ in range(self.height)]
        self.score = 0
        self.tetrominoes = 0
        self.cleared_lines = 0
        self.bag = list(range(len(self.pieces)))
        random.shuffle(self.bag)
        self.ind = self.bag.pop()
        self.piece = [row[:] for row in self.pieces[self.ind]]
        self.current_pos = {"x": self.width // 2 - len(self.piece[0]) // 2, "y": 0}
        self.gameover = False
        return self.get_state_properties(self.board)

    # -------------------------------------------------------------------------
    #  Rotate a piece 90 degrees
    # -------------------------------------------------------------------------
    def rotate(self, piece):
        num_rows_orig = num_cols_new = len(piece)
        num_rows_new = len(piece[0])
        rotated_array = []
        for i in range(num_rows_new):
            new_row = [0] * num_cols_new
            for j in range(num_cols_new):
                new_row[j] = piece[(num_rows_orig - 1) - j][i]
            rotated_array.append(new_row)
        return rotated_array

    # -------------------------------------------------------------------------
    #  Feature vector describing a board state
    # -------------------------------------------------------------------------
    def get_state_properties(self, board):
        lines_cleared, board = self.check_cleared_rows(board)
        holes = self.get_holes(board)
        bumpiness, height = self.get_bumpiness_and_height(board)
        return torch.FloatTensor([lines_cleared, holes, bumpiness, height])

    # -------------------------------------------------------------------------
    #  Number of holes on the board
    # -------------------------------------------------------------------------
    def get_holes(self, board):
        num_holes = 0
        for col in zip(*board):
            row = 0
            while row < self.height and col[row] == 0:
                row += 1
            num_holes += len([x for x in col[row + 1:] if x == 0])
        return num_holes

    # -------------------------------------------------------------------------
    #  Bumpiness and total height of the board
    # -------------------------------------------------------------------------
    def get_bumpiness_and_height(self, board):
        board = np.array(board)
        mask = board != 0
        invert_heights = np.where(mask.any(axis=0), np.argmax(mask, axis=0), self.height)
        heights = self.height - invert_heights
        total_height = np.sum(heights)
        currs = heights[:-1]
        nexts = heights[1:]
        diffs = np.abs(currs - nexts)
        total_bumpiness = np.sum(diffs)
        return total_bumpiness, total_height

    # -------------------------------------------------------------------------
    #  Enumerate all reachable next states for the current piece
    # -------------------------------------------------------------------------
    def get_next_states(self):
        states = {}
        piece_id = self.ind
        curr_piece = [row[:] for row in self.piece]
        if piece_id == 0:  # O piece
            num_rotations = 1
        elif piece_id == 2 or piece_id == 3 or piece_id == 4:
            num_rotations = 2
        else:
            num_rotations = 4
        for i in range(num_rotations):
            valid_xs = self.width - len(curr_piece[0])
            for x in range(valid_xs + 1):
                piece = [row[:] for row in curr_piece]
                pos = {"x": x, "y": 0}
                # Hard-drop the piece, then evaluate the resulting board.
                while not self.check_collision(piece, pos):
                    pos["y"] += 1
                self.truncate(piece, pos)
                board = self.store(piece, pos)
                states[(x, i)] = self.get_state_properties(board)
            curr_piece = self.rotate(curr_piece)
        return states

    # -------------------------------------------------------------------------
    #  Current board with the falling piece drawn in
    # -------------------------------------------------------------------------
    def get_current_board_state(self):
        board = [x[:] for x in self.board]
        for y in range(len(self.piece)):
            for x in range(len(self.piece[y])):
                board[y + self.current_pos["y"]][x + self.current_pos["x"]] = self.piece[y][x]
        return board

    # -------------------------------------------------------------------------
    #  Spawn a new piece (7-bag randomizer)
    # -------------------------------------------------------------------------
    def new_piece(self):
        if not len(self.bag):
            self.bag = list(range(len(self.pieces)))
            random.shuffle(self.bag)
        self.ind = self.bag.pop()
        self.piece = [row[:] for row in self.pieces[self.ind]]
        self.current_pos = {"x": self.width // 2 - len(self.piece[0]) // 2,
                            "y": 0}
        if self.check_collision(self.piece, self.current_pos):
            self.gameover = True

    # -------------------------------------------------------------------------
    #  Collision check; inputs: piece shape and position
    # -------------------------------------------------------------------------
    def check_collision(self, piece, pos):
        future_y = pos["y"] + 1
        for y in range(len(piece)):
            for x in range(len(piece[y])):
                if future_y + y > self.height - 1 or self.board[future_y + y][pos["x"] + x] and piece[y][x]:
                    return True
        return False

    def truncate(self, piece, pos):
        gameover = False
        last_collision_row = -1
        for y in range(len(piece)):
            for x in range(len(piece[y])):
                if self.board[pos["y"] + y][pos["x"] + x] and piece[y][x]:
                    if y > last_collision_row:
                        last_collision_row = y

        if pos["y"] - (len(piece) - last_collision_row) < 0 and last_collision_row > -1:
            while last_collision_row >= 0 and len(piece) > 1:
                gameover = True
                last_collision_row = -1
                del piece[0]
                for y in range(len(piece)):
                    for x in range(len(piece[y])):
                        if self.board[pos["y"] + y][pos["x"] + x] and piece[y][x] and y > last_collision_row:
                            last_collision_row = y
        return gameover

    def store(self, piece, pos):
        board = [x[:] for x in self.board]
        for y in range(len(piece)):
            for x in range(len(piece[y])):
                if piece[y][x] and not board[y + pos["y"]][x + pos["x"]]:
                    board[y + pos["y"]][x + pos["x"]] = piece[y][x]
        return board

    def check_cleared_rows(self, board):
        to_delete = []
        for i, row in enumerate(board[::-1]):
            if 0 not in row:
                to_delete.append(len(board) - 1 - i)
        if len(to_delete) > 0:
            board = self.remove_row(board, to_delete)
        return len(to_delete), board

    def remove_row(self, board, indices):
        for i in indices[::-1]:
            del board[i]
            board = [[0 for _ in range(self.width)]] + board
        return board

    def step(self, action, render=True, video=None):
        x, num_rotations = action
        self.current_pos = {"x": x, "y": 0}
        for _ in range(num_rotations):
            self.piece = self.rotate(self.piece)

        while not self.check_collision(self.piece, self.current_pos):
            self.current_pos["y"] += 1
            if render:
                self.render(video)

        overflow = self.truncate(self.piece, self.current_pos)
        if overflow:
            self.gameover = True

        self.board = self.store(self.piece, self.current_pos)

        lines_cleared, self.board = self.check_cleared_rows(self.board)
        score = 1 + (lines_cleared ** 2) * self.width
        self.score += score
        self.tetrominoes += 1
        self.cleared_lines += lines_cleared
        if not self.gameover:
            self.new_piece()
        if self.gameover:
            self.score -= 2

        return score, self.gameover

    def render(self, video=None):
        if not self.gameover:
            img = [self.piece_colors[p] for row in self.get_current_board_state() for p in row]
        else:
            img = [self.piece_colors[p] for row in self.board for p in row]
        img = np.array(img).reshape((self.height, self.width, 3)).astype(np.uint8)
        img = img[..., ::-1]
        img = Image.fromarray(img, "RGB")
        img = img.resize((self.width * self.block_size, self.height * self.block_size))
        img = np.array(img)
        # Draw grid lines, then attach the side panel.
        img[[i * self.block_size for i in range(self.height)], :, :] = 0
        img[:, [i * self.block_size for i in range(self.width)], :] = 0
        img = np.concatenate((img, self.extra_board), axis=1)
        cv2.putText(img, "Score:", (self.width * self.block_size + int(self.block_size / 2), self.block_size),
                    fontFace=cv2.FONT_HERSHEY_DUPLEX, fontScale=1.0, color=self.text_color)
        cv2.putText(img, str(self.score),
                    (self.width * self.block_size + int(self.block_size / 2), 2 * self.block_size),
                    fontFace=cv2.FONT_HERSHEY_DUPLEX, fontScale=1.0, color=self.text_color)
        cv2.putText(img, "Pieces:", (self.width * self.block_size + int(self.block_size / 2), 4 * self.block_size),
                    fontFace=cv2.FONT_HERSHEY_DUPLEX, fontScale=1.0, color=self.text_color)
        cv2.putText(img, str(self.tetrominoes),
                    (self.width * self.block_size + int(self.block_size / 2), 5 * self.block_size),
                    fontFace=cv2.FONT_HERSHEY_DUPLEX, fontScale=1.0, color=self.text_color)
        cv2.putText(img, "Lines:", (self.width * self.block_size + int(self.block_size / 2), 7 * self.block_size),
                    fontFace=cv2.FONT_HERSHEY_DUPLEX, fontScale=1.0, color=self.text_color)
        cv2.putText(img, str(self.cleared_lines),
                    (self.width * self.block_size + int(self.block_size / 2), 8 * self.block_size),
                    fontFace=cv2.FONT_HERSHEY_DUPLEX, fontScale=1.0, color=self.text_color)
        if video:
            video.write(img)

        cv2.imshow("Deep Q-Learning Tetris", img)
        cv2.waitKey(1)
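
Note the unusual interface: get_next_states() enumerates every reachable (x, rotation) placement of the current piece and returns its 4-feature evaluation, so the agent chooses among final placements instead of pressing keys frame by frame. The per-piece reward in step() is 1 + lines_cleared² × width; clearing two lines on the default width-10 board therefore yields 1 + 4 × 10 = 41, and a game over costs an extra 2 points. A random-policy smoke test might look like this (hypothetical snippet, not part of the repository):

import random
from src.tetris import Tetris

env = Tetris(width=10, height=20, block_size=20)
state = env.reset()                              # 4-feature tensor for the empty board
done = False
while not done:
    next_steps = env.get_next_states()           # {(x, rotation): feature tensor}
    action = random.choice(list(next_steps))     # ignore the features, place randomly
    reward, done = env.step(action, render=False)
print("score:", env.score, "pieces:", env.tetrominoes, "lines:", env.cleared_lines)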

Training code

import argparse
import os
import shutil
from random import random, randint, sample

import numpy as np
import torch
import torch.nn as nn
from tensorboardX import SummaryWriter
import time
from src.deep_q_network import DeepQNetwork
from src.tetris import Tetris
from collections import deque


def get_args():
    parser = argparse.ArgumentParser("""Implementation of Deep Q Network to play Tetris""")
    parser.add_argument("--width", type=int, default=10, help="The common width for all images")
    parser.add_argument("--height", type=int, default=20, help="The common height for all images")
    parser.add_argument("--block_size", type=int, default=30, help="Size of a block")
    parser.add_argument("--batch_size", type=int, default=512, help="The number of images per batch")
    parser.add_argument("--lr", type=float, default=1e-3)
    parser.add_argument("--gamma", type=float, default=0.99)
    parser.add_argument("--initial_epsilon", type=float, default=1)
    parser.add_argument("--final_epsilon", type=float, default=1e-3)
    parser.add_argument("--num_decay_epochs", type=float, default=2000)
    parser.add_argument("--num_epochs", type=int, default=3000)
    parser.add_argument("--save_interval", type=int, default=500)
    parser.add_argument("--replay_memory_size", type=int, default=30000,
                        help="Size of the replay memory")
    parser.add_argument("--log_path", type=str, default="tensorboard")
    parser.add_argument("--saved_path", type=str, default="trained_models")
    args = parser.parse_args()
    return args


def train(opt):
    # Fix seeds for reproducibility.
    if torch.cuda.is_available():
        torch.cuda.manual_seed(123)
    else:
        torch.manual_seed(123)
    if os.path.isdir(opt.log_path):
        shutil.rmtree(opt.log_path)
    os.makedirs(opt.log_path)
    writer = SummaryWriter(opt.log_path)
    env = Tetris(width=opt.width, height=opt.height, block_size=opt.block_size)
    model = DeepQNetwork()
    optimizer = torch.optim.Adam(model.parameters(), lr=opt.lr)
    criterion = nn.MSELoss()

    state = env.reset()
    if torch.cuda.is_available():
        model.cuda()
        state = state.cuda()

    replay_memory = deque(maxlen=opt.replay_memory_size)
    epoch = 0
    t1 = time.time()
    total_time = 0
    best_score = 1000
    while epoch < opt.num_epochs:
        next_steps = env.get_next_states()
        # Exploration or exploitation: epsilon decays linearly over num_decay_epochs.
        epsilon = opt.final_epsilon + (max(opt.num_decay_epochs - epoch, 0) * (
                opt.initial_epsilon - opt.final_epsilon) / opt.num_decay_epochs)
        u = random()
        random_action = u <= epsilon
        next_actions, next_states = zip(*next_steps.items())
        next_states = torch.stack(next_states)
        if torch.cuda.is_available():
            next_states = next_states.cuda()
        model.eval()
        with torch.no_grad():
            predictions = model(next_states)[:, 0]
        model.train()
        if random_action:
            index = randint(0, len(next_steps) - 1)
        else:
            index = torch.argmax(predictions).item()

        next_state = next_states[index, :]
        action = next_actions[index]

        reward, done = env.step(action, render=True)

        if torch.cuda.is_available():
            next_state = next_state.cuda()
        replay_memory.append([state, reward, next_state, done])
        if done:
            final_score = env.score
            final_tetrominoes = env.tetrominoes
            final_cleared_lines = env.cleared_lines
            state = env.reset()
            if torch.cuda.is_available():
                state = state.cuda()
        else:
            state = next_state
            continue
        # Only start learning once enough transitions have accumulated.
        if len(replay_memory) < opt.replay_memory_size / 10:
            continue
        epoch += 1
        batch = sample(replay_memory, min(len(replay_memory), opt.batch_size))
        state_batch, reward_batch, next_state_batch, done_batch = zip(*batch)
        state_batch = torch.stack(tuple(state for state in state_batch))
        reward_batch = torch.from_numpy(np.array(reward_batch, dtype=np.float32)[:, None])
        next_state_batch = torch.stack(tuple(state for state in next_state_batch))
        if torch.cuda.is_available():
            state_batch = state_batch.cuda()
            reward_batch = reward_batch.cuda()
            next_state_batch = next_state_batch.cuda()

        q_values = model(state_batch)
        model.eval()
        with torch.no_grad():
            next_prediction_batch = model(next_state_batch)
        model.train()

        # One-step Bellman targets: terminal transitions keep only their reward.
        y_batch = torch.cat(
            tuple(reward if done else reward + opt.gamma * prediction for reward, done, prediction in
                  zip(reward_batch, done_batch, next_prediction_batch)))[:, None]

        optimizer.zero_grad()
        loss = criterion(q_values, y_batch)
        loss.backward()
        optimizer.step()

        end_time = time.time()
        use_time = end_time - t1 - total_time
        total_time = end_time - t1
        print("Epoch: {}/{}, Action: {}, Score: {}, Tetrominoes {}, Cleared lines: {}, "
              "Used time: {}, total used time: {}".format(
            epoch, opt.num_epochs, action, final_score, final_tetrominoes,
            final_cleared_lines, use_time, total_time))
        writer.add_scalar('Train/Score', final_score, epoch - 1)
        writer.add_scalar('Train/Tetrominoes', final_tetrominoes, epoch - 1)
        writer.add_scalar('Train/Cleared lines', final_cleared_lines, epoch - 1)

        if epoch > 0 and epoch % opt.save_interval == 0:
            print("save interval model: {}".format(epoch))
            torch.save(model, "{}/tetris_{}".format(opt.saved_path, epoch))
        elif final_score > best_score:
            best_score = final_score
            print("save best model: {}".format(best_score))
            torch.save(model, "{}/tetris_{}".format(opt.saved_path, best_score))


if __name__ == "__main__":
    opt = get_args()
    train(opt)
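
All hyperparameters above are exposed as argparse flags, so a run is launched and monitored from the shell (assuming TensorBoard is installed alongside tensorboardX):

python train.py --num_epochs 3000 --save_interval 500 --lr 1e-3
tensorboard --logdir tensorboard

Note that torch.save(model, ...) pickles the entire module, so any later torch.load needs src/deep_q_network.py importable from the working directory.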

Test code

import argparse
import torch
import cv2
from src.tetris import Tetris


def get_args():
    parser = argparse.ArgumentParser("""Implementation of Deep Q Network to play Tetris""")
    parser.add_argument("--width", type=int, default=10, help="The common width for all images")
    parser.add_argument("--height", type=int, default=20, help="The common height for all images")
    parser.add_argument("--block_size", type=int, default=30, help="Size of a block")
    parser.add_argument("--fps", type=int, default=300, help="frames per second")
    parser.add_argument("--saved_path", type=str, default="trained_models")
    parser.add_argument("--output", type=str, default="output.mp4")
    args = parser.parse_args()
    return args


def test(opt):
    if torch.cuda.is_available():
        torch.cuda.manual_seed(123)
    else:
        torch.manual_seed(123)
    # torch.save stored the whole module, so torch.load restores it directly.
    if torch.cuda.is_available():
        model = torch.load("{}/tetris_2000".format(opt.saved_path))
    else:
        model = torch.load("{}/tetris_2000".format(opt.saved_path), map_location=lambda storage, loc: storage)
    model.eval()
    env = Tetris(width=opt.width, height=opt.height, block_size=opt.block_size)
    env.reset()
    if torch.cuda.is_available():
        model.cuda()
    out = cv2.VideoWriter(opt.output, cv2.VideoWriter_fourcc(*"MJPG"), opt.fps,
                          (int(1.5 * opt.width * opt.block_size), opt.height * opt.block_size))
    while True:
        next_steps = env.get_next_states()
        next_actions, next_states = zip(*next_steps.items())
        next_states = torch.stack(next_states)
        if torch.cuda.is_available():
            next_states = next_states.cuda()
        predictions = model(next_states)[:, 0]
        index = torch.argmax(predictions).item()
        action = next_actions[index]
        _, done = env.step(action, render=True, video=out)
        if done:
            out.release()
            break


if __name__ == "__main__":
    opt = get_args()
    test(opt)
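
The script hard-codes the tetris_2000 checkpoint; point --saved_path at your own run (or edit the checkpoint name) to test a different model. A typical invocation that also records output.mp4:

python test.py --saved_path trained_models --output output.mp4 --fps 300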

Results

[Gameplay recording of the trained agent; see output.mp4.]
