
CentOS 7: Showing command execution time and IP address in history

In CentOS, the history command shows the commands a user has previously executed. For example, running history as root gives the following result:

[root@dbserver ~]# history|head -3
    1  ip add
    2  ping www.sina.com.cn
    3  cd /etc/system-config

By default only the commands themselves are shown. Being able to see when each command was executed, and from which IP address, can be a great help when troubleshooting and diagnosing problems, and is also useful for operation auditing.

1 Showing execution time in history

bash controls the history display format through the HISTTIMEFORMAT environment variable. To make the setting effective for all users, put it in /etc/bashrc. Edit /etc/bashrc and add the following line:

export HISTTIMEFORMAT="%F %T "

%T prints the time; combined with %F it prints the full date and time, while %T alone prints the time without the date. Source the file to make it take effect, then run history again; the output now includes the execution time:

[root@dbserver ~]# source /etc/bashrc
[root@dbserver ~]# history |head -2
    1  2023-03-09 13:47:16 ip add
    2  2023-03-09 13:47:16 ping www.sina.com.cn

2 Embedding the IP address in the history file name

Getting history to record the IP of the command executor is trickier than it looks. A common trick is to embed the output of the who command in HISTTIMEFORMAT, but that actually shows the IP of the session that runs the history command, not the IP of the session that executed each recorded command. One way to capture the real executor's IP is to set the HISTFILE environment variable and make the IP part of the history file name. Besides the who command, most systems today are accessed over ssh, and the SSH_CLIENT environment variable contains the client IP address, which can be extracted like this:

arr=($SSH_CLIENT)
ssh_ip=${arr[0]}
export ssh_ip

[root@dbserver ~]# echo $ssh_ip
192.168.56.1

SSH_CLIENT is a space-separated string; turning it into an array and taking the first element yields the IP address. With the IP in hand, make it part of the history file name by setting the HISTFILE environment variable:

export HISTFILE=$ssh_ip$RANDOM".history"

The $RANDOM variable prevents repeated logins from the same IP with the same account from overwriting the previous file. Run a few commands, log out of the session and log back in, and the history file appears:

[root@dbserver ~]# ls -l
total 2883740
-rw-------. 1 root root 112 Mar  9 16:45 192.168.56.17946.history

The IP-recording part of /etc/bashrc is as follows:

arr=($SSH_CLIENT)
ssh_ip=${arr[0]}
export ssh_ip
export HISTFILE=$ssh_ip$RANDOM".history"
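Putting the pieces above together, a minimal sketch of the /etc/bashrc additions might look like the following. One detail is an assumption not covered in the article: falling back to "local" when SSH_CLIENT is unset, so console logins still get a usable history file name.

```shell
# Sketch of the /etc/bashrc additions described above.
# Assumption: fall back to "local" when SSH_CLIENT is unset (console logins).

export HISTTIMEFORMAT="%F %T "   # prepend full date and time to each history entry

# SSH_CLIENT is "client_ip client_port server_port"; take the first field
arr=($SSH_CLIENT)
ssh_ip=${arr[0]:-local}
export ssh_ip

# embed the client IP (plus $RANDOM to avoid overwriting earlier files)
# in the history file name
export HISTFILE="$ssh_ip$RANDOM.history"
```

After sourcing the file, new sessions write their history to a file named after the client IP, e.g. 192.168.56.17946.history.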

Oracle: COUNT STOPKEY optimization

In day-to-day Oracle database operations you often come across statements of this kind: they query a large table (from a few million to tens of millions of rows) with several predicates and return only one or a few rows. In their simplest form they look like this:

select * from test where id< 1000 and rownum=1;

Such statements also show up occasionally in AWR reports, with very high logical reads but few rows returned. Do they need tuning? How should they be tuned, and how do you check whether they have a performance bottleneck? The simple example below shows how to evaluate and optimize them.

The first step is to create a test table with the following statement:

create table model_tab as (
  select * from dual
  model dimension by (0 d) measures (0 rnum)
  rules iterate(1000000) (rnum[ITERATION_NUMBER] = ITERATION_NUMBER+1));

This uses the MODEL clause of SELECT, which defines a one-dimensional array: the dimension is column d derived from the dual table, the measure value is the iteration number plus one, and with 1,000,000 iterations the resulting table has 1,000,000 rows.

Query the table with a condition on rnum:

select * from model_tab where rnum<=1000 and rownum=1;

Without an index, the execution plan is:

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
Plan hash value: 2500570731

--------------------------------------------------------------------------------
| Id  | Operation          | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |           |     1 |    10 |     3   (0)| 00:00:01 |
|*  1 |  COUNT STOPKEY     |           |       |       |            |          |
|*  2 |   TABLE ACCESS FULL| MODEL_TAB |     3 |    30 |     3   (0)| 00:00:01 |
--------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(ROWNUM=1)
   2 - filter("RNUM"<=1000)

The plan shows a full table scan with an estimated three rows scanned, under a COUNT STOPKEY operation. Now create an index on the rnum column of model_tab:

create index idx_rnum on model_tab(rnum);

With the index in place, the execution plan changes to:

Plan hash value: 2837899601

---------------------------------------------------------------------------------------------------
| Id  | Operation                            | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |           |     1 |    10 |     4   (0)| 00:00:01 |
|*  1 |  COUNT STOPKEY                       |           |       |       |            |          |
|   2 |   TABLE ACCESS BY INDEX ROWID BATCHED| MODEL_TAB |     3 |    30 |     4   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN                  | IDX_RNUM  |  1000 |       |     3   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(ROWNUM=1)
   3 - access("RNUM"<=1000)

The optimizer now uses the newly created index with a range scan estimated at 1000 rows, so it evidently considers this plan better than the full table scan. Yet comparing the two plans by estimated rows and cost shows no obvious difference; the index range scan is even estimated to touch more rows. Why does the optimizer choose the index, and is that choice correct? Look at the actual execution statistics with and without the index. Without the index, the statistics are:

Statistics
----------------------------------------------------------
    0  recursive calls
    0  db block gets
    4  consistent gets
    0  physical reads
    0  redo size
  621  bytes sent via SQL*Net to client
  415  bytes received via SQL*Net from client
    2  SQL*Net roundtrips to/from client
    0  sorts (memory)
    0  sorts (disk)
    1  rows processed

After creating the index, the statistics are:

    0  recursive calls
    0  db block gets
    4  consistent gets
    0  physical reads
    0  redo size
  623  bytes sent via SQL*Net to client
  415  bytes received via SQL*Net from client
    2  SQL*Net roundtrips to/from client
    0  sorts (memory)
    0  sorts (disk)
    1  rows processed

Still no meaningful difference: 4 consistent gets in both cases. Does the statement really run just as efficiently with or without the index? Change the rnum condition in the statement and look again:

select * from model_tab where rnum>=100000 and rownum=1;

With the rnum condition changed to greater than or equal to 100,000, check the execution statistics again with and without the index. Without the index:

Statistics
----------------------------------------------------------
    0  recursive calls
    0  db block gets
  208  consistent gets
    0  physical reads
    0  redo size
  622  bytes sent via SQL*Net to client
  417  bytes received via SQL*Net from client
    2  SQL*Net roundtrips to/from client
    0  sorts (memory)
    0  sorts (disk)
    1  rows processed

With the index:

Statistics
----------------------------------------------------------
    0  recursive calls
    0  db block gets
    4  consistent gets
    0  physical reads
    0  redo size
  626  bytes sent via SQL*Net to client
  417  bytes received via SQL*Net from client
    2  SQL*Net roundtrips to/from client
    0  sorts (memory)
    0  sorts (disk)
    1  rows processed

Now the statistics show a clear advantage: with the index the statement does only 4 logical reads, far fewer than the 208 logical reads without it.

The comparison shows that without an index, under a full table scan, execution efficiency depends heavily on the predicate: the COUNT STOPKEY operation stops the full scan and returns as soon as a qualifying row is found, so efficiency varies with both the search condition and the data distribution, and can fluctuate widely. With an index, the range scan likewise stops once a qualifying row is found, but execution efficiency stays relatively stable, which follows from the nature of a B-tree index. If daily monitoring or an AWR report shows such a statement doing many logical reads per execution or consuming too much CPU time, check whether an effective index exists for it, because with an index range scan its logical reads should be small. When the query has several predicates, a suitable composite index is all the more necessary and brings an even bigger performance gain, since with multiple conditions a full scan may well have to read the entire table before it finds a qualifying row.
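To make the closing point concrete, here is a hedged sketch of the composite-index idea. The table and column names below are invented for illustration; they are not the model_tab test table from above.

```sql
-- Hypothetical example: a multi-predicate stopkey query of the form
--   select * from big_tab where col1 = :1 and col2 <= :2 and rownum = 1;
-- (big_tab, col1 and col2 are made-up names, not from the test above)

-- A composite index matching both predicates lets the INDEX RANGE SCAN
-- stop at the first qualifying entry, instead of a full scan that may
-- read the whole table before a row satisfies both conditions:
create index idx_big_tab_c1_c2 on big_tab (col1, col2);
```

Leading with the equality column keeps the range predicate on col2 inside a single index range, which is the usual column-ordering rule of thumb for such indexes.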

Oracle session timeout settings, part 3: configuration in the user profile

The third way to manage session timeouts in an Oracle database is through the user's profile. Its advantage over the other two methods is that timeout parameters can be set separately for each user, which gives more flexibility.

1 The user profile

When you create a user you can assign it a profile, which defines the user's password policy and resource limits, including the session idle timeout and the session connect-time limit. Usually no profile is specified at user creation, in which case the user gets the database default profile, simply named DEFAULT.

Whether the resource limits in a profile are enforced is controlled by an instance parameter named resource_limit, whose default value is as follows:

SQL> select NAME,VALUE,DEFAULT_VALUE,ISSYS_MODIFIABLE, DESCRIPTION from v$parameter where name='resource_limit';

NAME             VALUE DEFAULT_VALUE  ISSYS_MOD DESCRIPTION
---------------- ----- -------------- --------- ----------------------------------------
resource_limit   TRUE  TRUE           IMMEDIATE master switch for resource limit

The default value is TRUE, and changes take effect immediately. The parameter affects user resource limits and session management only; it has no effect on password policy, which is always enforced regardless of the parameter's value.

Another consideration is that enabling profile resource limits has a slight performance cost, because on every connection Oracle must load the profile and apply the resource policy.

2 Idle-time and connect-time limits in the profile

The user's profile can set a session idle timeout. If the idle time between calls in a session exceeds the configured limit, the session's current transaction is rolled back, the session is terminated, and the resources it holds are released. The session's next call receives an error indicating it is no longer connected to the instance. Note that after an idle timeout, the PMON background process performs the session cleanup; until PMON finishes, the process still shows as active and still counts against session and user resource limits.

The profile can also limit the session's total connect time: if the session's connect time exceeds the limit, the current transaction is rolled back, the session is dropped, and its resources are released.

Both parameters are in minutes, and neither is exact. Oracle does not monitor idle or connect time continuously, to save resources and avoid too large a performance impact. Instead it checks every few minutes, so a session may exceed the configured limit slightly, possibly by a few minutes, before being timed out. The default settings in the database are:

SQL> select PROFILE,RESOURCE_NAME,LIMIT from dba_profiles where PROFILE='DEFAULT' and RESOURCE_NAME in ('IDLE_TIME', 'CONNECT_TIME');

PROFILE                          RESOURCE_NAME                    LIMIT
-------------------------------- -------------------------------- --------------------------------
DEFAULT                          IDLE_TIME                        UNLIMITED
DEFAULT                          CONNECT_TIME                     UNLIMITED

In the default DEFAULT profile both parameters are UNLIMITED, i.e. no limit.

3 Testing

First test the idle timeout by changing the value directly in the DEFAULT profile. A change in the profile only affects sessions created afterwards, not sessions already logged in.

SQL> alter profile DEFAULT limit IDLE_TIME 3;

Profile altered.

Open a new session after the change and leave it idle. After about three minutes the session is terminated, and the database alert log shows the following:

2023-02-17T15:00:12.939943+08:00
KILL SESSION for sid=(110, 33842):
  Reason = profile limit idle_time
  Mode = KILL SOFT -/-/NO_REPLAY
  Requestor = PMON (orapid = 2, ospid = 1672, inst = 1)
  Owner = Process: USER (orapid = 47, ospid = 3895)
  Result = ORA-0

The alert log records the killed session's sid and serial#, the reason the session was killed, and that the kill was requested by the PMON process.

After the test, restore the default setting in the DEFAULT profile:

alter profile DEFAULT limit IDLE_TIME unlimited;

To test the connect-time limit, use a dedicated profile: create a separate profile for the user that sets the session connect-time limit.

SQL> create profile test_connect limit CONNECT_TIME 3;

Profile created.

A profile must be created with at least one parameter; the others take their default values, as seen below:

SQL> select PROFILE,RESOURCE_NAME,LIMIT from dba_profiles where PROFILE='TEST_CONNECT';

PROFILE              RESOURCE_NAME                    LIMIT
-------------------- -------------------------------- --------------------
TEST_CONNECT         COMPOSITE_LIMIT                  DEFAULT
TEST_CONNECT         SESSIONS_PER_USER                DEFAULT
TEST_CONNECT         CPU_PER_SESSION                  DEFAULT
TEST_CONNECT         CPU_PER_CALL                     DEFAULT
TEST_CONNECT         LOGICAL_READS_PER_SESSION        DEFAULT
TEST_CONNECT         LOGICAL_READS_PER_CALL           DEFAULT
TEST_CONNECT         IDLE_TIME                        DEFAULT
TEST_CONNECT         CONNECT_TIME                     3
TEST_CONNECT         PRIVATE_SGA                      DEFAULT
TEST_CONNECT         FAILED_LOGIN_ATTEMPTS            DEFAULT
TEST_CONNECT         PASSWORD_LIFE_TIME               DEFAULT
TEST_CONNECT         PASSWORD_REUSE_TIME              DEFAULT
TEST_CONNECT         PASSWORD_REUSE_MAX               DEFAULT
TEST_CONNECT         PASSWORD_VERIFY_FUNCTION         DEFAULT
TEST_CONNECT         PASSWORD_LOCK_TIME               DEFAULT
TEST_CONNECT         PASSWORD_GRACE_TIME              DEFAULT
TEST_CONNECT         INACTIVE_ACCOUNT_TIME            DEFAULT

Assign this profile to the test user:

SQL> alter user test profile test_connect;

User altered.

Open a session:

SQL> /

SYSDATE
-------------------
2023-02-17 15:13:02

SQL> /

SYSDATE
-------------------
2023-02-17 15:13:04

SQL> /
select sysdate from dual
ERROR at line 1:
ORA-02399: exceeded maximum connect time, you are being logged off

When the session times out, the error reports that the maximum connect time was exceeded and the session is logged off.
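Combining the pieces above into one per-user profile might look like the following sketch. The profile name, user name, and limit values are illustrative only; they are not taken from the article's tests.

```sql
-- Sketch: a single profile limiting both idle time and total connect time.
-- Values are in minutes; app_timeout and app_user are hypothetical names.
create profile app_timeout limit
  idle_time    30
  connect_time 480;

alter user app_user profile app_timeout;

-- Profile resource limits are only enforced while resource_limit is TRUE
alter system set resource_limit = true;
```

As noted above, the new limits apply only to sessions created after the change; existing sessions keep running under their old settings.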

Oracle session timeout settings, part 1: configuration in sqlnet.ora and listener.ora

An important part of Oracle database session management is timeout management, which involves connection-establishment timeouts, session idle timeouts, and session failure timeouts. The configuration is fairly involved: timeouts can be set in Oracle Net Services, at the instance level, or at the user level. This article covers the session timeout settings in Oracle Net Services.

1 Oracle Net Services overview

Oracle Net Services is the enterprise-grade connectivity solution for distributed, heterogeneous environments. It simplifies network configuration and management, maximizes network performance, and improves network diagnostic capability. Oracle Net Services consists of three main components: Oracle Net, the Oracle listener, and Oracle Connection Manager.

Oracle Net establishes and maintains connections between client applications and the database server and exchanges messages between them. Oracle Net is deployed on every computer in the network; its position and role in the system can be seen in the diagrams in the official documentation.

The Oracle listener hands client connection requests over to the server; once a connection is established, the client and the Oracle server communicate directly. Its position and function in the architecture can likewise be seen in the official documentation. Because the listener plays a role only while a connection is being established, the listener log contains a great deal of connection-establishment information, both successful and failed, but no information about events after the connection is up (such as connection timeouts or failures).

Oracle Connection Manager is not a mandatory component; it runs on its own machine, separate from clients and servers, and acts as a proxy or filter for database requests.

2 Session timeout settings in sqlnet.ora

2.1 SQLNET.EXPIRE_TIME

This parameter is in minutes and specifies a probe interval. The default is 0; when set above 0, Oracle sends a check message at the specified interval to verify that the client/server connection is still alive. If a terminated or abandoned connection is detected, an error is returned, causing the server process to exit. Its main purpose is to ensure that after a client terminates abnormally, the connection does not stay open indefinitely. The parameter is used mainly on the database server side. Setting it above 0 generates extra network traffic; although very small, it can still affect network performance.

2.2 SQLNET.RECV_TIMEOUT and SQLNET.SEND_TIMEOUT

SQLNET.RECV_TIMEOUT, in seconds, sets how long the database server waits for the client after the client establishes a connection. The client must send some data within the specified interval; setting this parameter is recommended in environments where clients frequently close or fail. If the client sends no data within the interval, the database server records an ORA-12535: TNS:operation timed out or ORA-12609: TNS: Receive timeout occurred message in sqlnet.log. Without this parameter, the database server might wait forever for a failed or closed client. The parameter has no default value. It can also be used on the client side, where it sets how long the client waits for the server to send data.

SQLNET.SEND_TIMEOUT, also in seconds, sets how long the database has to complete a send operation after the client establishes a connection. It is likewise recommended in environments where clients frequently close or fail; without it, a server send operation may wait indefinitely when the client has closed or failed. It too can be used on the client side.

2.3 SQLNET.INBOUND_CONNECT_TIMEOUT and SQLNET.OUTBOUND_CONNECT_TIMEOUT

These two parameters set timeouts for connection establishment. SQLNET.INBOUND_CONNECT_TIMEOUT, in seconds, limits how long a user has to establish a connection: if the user cannot connect and complete authentication within the allowed time, the database server terminates the connection and records the client's IP address and an ORA-12170 error in sqlnet.log, while the client receives an ORA-12547: TNS:lost contact or an ORA-12637: Packet receive failed error message. The default value of this parameter is 60.

SQLNET.OUTBOUND_CONNECT_TIMEOUT, also in seconds, applies when a client establishes a connection to the server: if the connection is not established within the specified time, the connection attempt is aborted and the client receives an ORA-12170: TNS:Connect timeout occurred error. Without this parameter, when the database server host is unreachable, a client connection request may block for the interval defined by the TCP connect timeout, 60 seconds by default.

2.4 TCP.CONNECT_TIMEOUT

This parameter is in seconds with a default of 60, and it limits how long the client may take to establish the TCP connection to the database server. If no TCP connection is created within the specified interval, the connection attempt is aborted and the client receives an ORA-12170: TNS:Connect timeout occurred error. This interval is a subset of SQLNET.OUTBOUND_CONNECT_TIMEOUT, which covers the whole process of the client connecting to the database instance.

3 Timeout settings in listener.ora

The INBOUND_CONNECT_TIMEOUT_listener_name parameter sets how long, after the network connection is established, the client has to complete its connection request to the listener. If the server does not receive the client's request within the specified time, it terminates the connection and records the client's IP address and an ORA-12525: TNS: listener has not received client's request in time allowed error in the listener log. The default value of this parameter is 60 seconds.

4 Connection timeouts with default settings

With no explicit configuration, Oracle's timeout settings are as follows: INBOUND_CONNECT_TIMEOUT_listener_name is 60 seconds, SQLNET.INBOUND_CONNECT_TIMEOUT is 60 seconds, TCP.CONNECT_TIMEOUT is 60 seconds, and everything else is unset or 0, i.e. not in effect. So while a client is establishing a connection to the database server, a timeout or failure produces an error; once the connection is established, there is no timeout mechanism at all.
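Collected in one place, the server-side parameters discussed above might look like this in sqlnet.ora and listener.ora. The values here are illustrative placeholders, not recommendations from the article; the defaults remain as described in section 4.

```ini
# sqlnet.ora (database server side) -- units as noted
SQLNET.EXPIRE_TIME = 10                 # minutes; dead-connection probe interval
SQLNET.RECV_TIMEOUT = 30                # seconds; wait for data from the client
SQLNET.SEND_TIMEOUT = 30                # seconds; complete a send to the client
SQLNET.INBOUND_CONNECT_TIMEOUT = 60     # seconds; connect and authenticate

# listener.ora -- replace LISTENER with the actual listener name
INBOUND_CONNECT_TIMEOUT_LISTENER = 60   # seconds; client must complete its request
```

On the client side, SQLNET.OUTBOUND_CONNECT_TIMEOUT and TCP.CONNECT_TIMEOUT would go into the client's sqlnet.ora in the same format.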

Oracle 19c RPM installation and creating a non-CDB database

1 Installing the oracle-database-preinstall package

Before installing Oracle 19c from RPM you need to install the oracle-database-preinstall package, which performs the preparation work that previously had to be done by hand, including:

  • downloading and installing the packages required by the grid and database installations
  • creating the oracle user and the oinstall and dba groups
  • setting sysctl.conf, system startup parameters and driver parameters
  • setting soft and hard resource limits
  • setting other parameters depending on the kernel version
  • setting numa=off in the kernel on Linux x86_64 machines

Installing this preinstall package ran into dependency errors:

[root@localhost ~]# rpm -ivh oracle-database-preinstall-19c-1.0-3.el7.x86_64.rpm
warning: oracle-database-preinstall-19c-1.0-3.el7.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID ec551f03: NOKEY
error: Failed dependencies:
        compat-libcap1 is needed by oracle-database-preinstall-19c-1.0-3.el7.x86_64
        glibc-devel is needed by oracle-database-preinstall-19c-1.0-3.el7.x86_64
        ksh is needed by oracle-database-preinstall-19c-1.0-3.el7.x86_64
        libaio-devel is needed by oracle-database-preinstall-19c-1.0-3.el7.x86_64
        libstdc++-devel is needed by oracle-database-preinstall-19c-1.0-3.el7.x86_64
        nfs-utils is needed by oracle-database-preinstall-19c-1.0-3.el7.x86_64
        psmisc is needed by oracle-database-preinstall-19c-1.0-3.el7.x86_64
        xorg-x11-utils is needed by oracle-database-preinstall-19c-1.0-3.el7.x86_64
        xorg-x11-xauth is needed by oracle-database-preinstall-19c-1.0-3.el7.x86_64

Install the dependency packages listed above:

[root@localhost ~]# yum install compat-libcap1 glibc-devel libaio-devel libstdc++-devel nfs-utils psmisc xorg-x11-utils xorg-x11-xauth
Installed:
  compat-libcap1.x86_64 0:1.10-7.el7  glibc-devel.x86_64 0:2.17-326.el7_9  libaio-devel.x86_64 0:0.3.109-13.el7
  libstdc++-devel.x86_64 0:4.8.5-44.el7  nfs-utils.x86_64 1:1.3.0-0.68.el7.2  psmisc.x86_64 0:22.20-17.el7
  xorg-x11-utils.x86_64 0:7.5-23.el7  xorg-x11-xauth.x86_64 1:1.0.9-1.el7
Dependency Installed:
  glibc-headers.x86_64 0:2.17-326.el7_9  gssproxy.x86_64 0:0.7.0-30.el7_9  kernel-headers.x86_64 0:3.10.0-1160.83.1.el7
  keyutils.x86_64 0:1.5.8-3.el7  libICE.x86_64 0:1.0.9-9.el7  libSM.x86_64 0:1.2.2-2.el7  libX11.x86_64 0:1.6.7-4.el7_9
  libX11-common.noarch 0:1.6.7-4.el7_9  libXau.x86_64 0:1.0.8-2.1.el7  libXext.x86_64 0:1.3.3-3.el7
  libXi.x86_64 0:1.7.9-1.el7  libXinerama.x86_64 0:1.1.3-2.1.el7  libXmu.x86_64 0:1.1.2-2.el7
  libXrandr.x86_64 0:1.5.1-2.el7  libXrender.x86_64 0:0.9.10-1.el7  libXt.x86_64 0:1.1.5-3.el7
  libXtst.x86_64 0:1.2.3-1.el7  libXv.x86_64 0:1.0.11-1.el7  libXxf86dga.x86_64 0:1.1.4-2.1.el7
  libXxf86misc.x86_64 0:1.0.3-7.1.el7  libXxf86vm.x86_64 0:1.1.4-1.el7  libbasicobjects.x86_64 0:0.1.1-32.el7
  libcollection.x86_64 0:0.7.0-32.el7  libdmx.x86_64 0:1.1.3-3.el7  libevent.x86_64 0:2.0.21-4.el7
  libini_config.x86_64 0:1.3.1-32.el7  libnfsidmap.x86_64 0:0.25-19.el7  libpath_utils.x86_64 0:0.2.1-32.el7
  libref_array.x86_64 0:0.1.5-32.el7  libverto-libevent.x86_64 0:0.2.5-4.el7  libxcb.x86_64 0:1.13-1.el7
Dependency Updated:
  glibc.x86_64 0:2.17-326.el7_9  glibc-common.x86_64 0:2.17-326.el7_9
Complete!

Install the preinstall package again:

[root@localhost ~]# rpm -ivh oracle-database-preinstall-19c-1.0-3.el7.x86_64.rpm
warning: oracle-database-preinstall-19c-1.0-3.el7.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID ec551f03: NOKEY
error: Failed dependencies:
        ksh is needed by oracle-database-preinstall-19c-1.0-3.el7.x86_64

The ksh dependency can be skipped, but installing it is fine too:

[root@localhost ~]# yum install ksh
Installed:
  ksh.x86_64 0:20120801-144.el7_9
Complete!

Install once more:

[root@localhost ~]# rpm -ivh oracle-database-preinstall-19c-1.0-3.el7.x86_64.rpm
warning: oracle-database-preinstall-19c-1.0-3.el7.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID ec551f03: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:oracle-database-preinstall-19c-1.################################# [100%]

Check the effect of the package after installation:

[root@localhost ~]# sysctl -p
fs.file-max = 6815744
kernel.sem = 250 32000 100 128
kernel.shmmni = 4096
kernel.shmall = 1073741824
kernel.shmmax = 4398046511104
kernel.panic_on_oops = 1
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
net.ipv4.conf.all.rp_filter = 2
net.ipv4.conf.default.rp_filter = 2
fs.aio-max-nr = 1048576
net.ipv4.ip_local_port_range = 9000 65500

The sysctl parameters have been changed and are already in effect. Look at the contents of sysctl.conf:

[root@localhost etc]# cat sysctl.conf
# sysctl settings are defined through files in
# /usr/lib/sysctl.d/, /run/sysctl.d/, and /etc/sysctl.d/.
# Vendors settings live in /usr/lib/sysctl.d/.
# To override a whole file, create a new file with the same in
# /etc/sysctl.d/ and put new settings there. To override
# only specific settings, add a file with a lexically later
# name in /etc/sysctl.d/ and put new settings there.
# For more information, see sysctl.conf(5) and sysctl.d(5).
# oracle-database-preinstall-19c setting for fs.file-max is 6815744
fs.file-max = 6815744
# oracle-database-preinstall-19c setting for kernel.sem is '250 32000 100 128'
kernel.sem = 250 32000 100 128
# oracle-database-preinstall-19c setting for kernel.shmmni is 4096
kernel.shmmni = 4096
# oracle-database-preinstall-19c setting for kernel.shmall is 1073741824 on x86_64
kernel.shmall = 1073741824
# oracle-database-preinstall-19c setting for kernel.shmmax is 4398046511104 on x86_64
kernel.shmmax = 4398046511104
# oracle-database-preinstall-19c setting for kernel.panic_on_oops is 1 per Orabug 19212317
kernel.panic_on_oops = 1
# oracle-database-preinstall-19c setting for net.core.rmem_default is 262144
net.core.rmem_default = 262144
# oracle-database-preinstall-19c setting for net.core.rmem_max is 4194304
net.core.rmem_max = 4194304
# oracle-database-preinstall-19c setting for net.core.wmem_default is 262144
net.core.wmem_default = 262144
# oracle-database-preinstall-19c setting for net.core.wmem_max is 1048576
net.core.wmem_max = 1048576
# oracle-database-preinstall-19c setting for net.ipv4.conf.all.rp_filter is 2
net.ipv4.conf.all.rp_filter = 2
# oracle-database-preinstall-19c setting for net.ipv4.conf.default.rp_filter is 2
net.ipv4.conf.default.rp_filter = 2
# oracle-database-preinstall-19c setting for fs.aio-max-nr is 1048576
fs.aio-max-nr = 1048576
# oracle-database-preinstall-19c setting for net.ipv4.ip_local_port_range is 9000 65500
net.ipv4.ip_local_port_range = 9000 65500

Every value set by oracle-database-preinstall-19c carries an explanatory comment.

2 Installing the database from the RPM package

[root@localhost ~]# rpm -ivh oracle-database-ee-19c-1.0-1.x86_64.rpm
warning: oracle-database-ee-19c-1.0-1.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID ec551f03: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:oracle-database-ee-19c-1.0-1     ################################# [100%]
[INFO] Executing post installation scripts...
[INFO] Oracle home installed successfully and ready to be configured.
To configure a sample Oracle Database you can execute the following service configuration script as root:
/etc/init.d/oracledb_ORCLCDB-19c configure

The post-installation message gives the command for creating a sample database; running it creates one:

[root@localhost init.d]# pwd
/etc/init.d
[root@localhost init.d]# ./oracledb_ORCLCDB-19c configure
Configuring Oracle Database ORCLCDB.
Prepare for db operation
8% complete
Copying database files
31% complete
Creating and starting Oracle instance
32% complete
36% complete
40% complete
43% complete
46% complete
Completing Database Creation
51% complete
54% complete
Creating Pluggable Databases
58% complete
77% complete
Executing Post Configuration Actions
100% complete
Database creation complete. For details check the logfiles at:
 /opt/oracle/cfgtoollogs/dbca/ORCLCDB.
Database Information:
Global Database Name:ORCLCDB
System Identifier(SID):ORCLCDB
Look at the log file "/opt/oracle/cfgtoollogs/dbca/ORCLCDB/ORCLCDB.log" for further details.
Database configuration completed successfully. The passwords were auto generated, you must change them by connecting to the database using 'sqlplus / as sysdba' as the oracle user.

3 Logging in to the sample database

Before logging in, set the oracle user's environment variables, mainly the following three:

[oracle@localhost ~]$ export ORACLE_SID=ORCLCDB
[oracle@localhost ~]$ export ORACLE_HOME=/opt/oracle/product/19c/dbhome_1   ## no trailing slash here
[oracle@localhost ~]$ export PATH=$PATH:$ORACLE_HOME/bin

With those set, you can log in to the database:

[oracle@localhost dbhome_1]$ sqlplus / as sysdba
SQL*Plus: Release 19.0.0.0.0 - Production on Fri Jan 27 22:06:13 2023
Version 19.3.0.0.0
Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 ORCLPDB1                       READ WRITE NO

As the output shows, this sample database is a container database (CDB).

4 Creating a non-CDB database

The sample database configured after the Oracle 19c RPM install is a CDB, but most systems in use still run the traditional non-CDB architecture. After installing the RPM package you can create a database to your own requirements with dbca; before doing so, delete the sample database that was already created.

[root@localhost init.d]# ./oracledb_ORCLCDB-19c delete
Detecting existing Listeners...
Deleting Oracle Listener....
Detecting existing Oracle Databases...
Deleting Oracle Database ORCLCDB.
[WARNING] [DBT-19202] The Database Configuration Assistant will delete the Oracle instances and datafiles for your database. All information in the database will be destroyed.
Prepare for db operation
32% complete
Connecting to database
35% complete
39% complete
42% complete
45% complete
48% complete
52% complete
65% complete
Updating network configuration files
68% complete
Deleting instance and datafiles
84% complete
100% complete
Database deletion completed.
Look at the log file "/opt/oracle/cfgtoollogs/dbca/ORCLCDB/ORCLCDB0.log" for further details.

Prepare a response file; most parameters in it have default values. The response file used here is as follows:

[oracle@dbserver ~]$ cat dbca.rsp
##############################################################################
##                                                                          ##
##                            DBCA response file                            ##
##                            ------------------                            ##
##                                                                          ##
## Specify values for the variables listed below to customize               ##
## your installation.                                                       ##
##                                                                          ##
## Each variable is associated with a comment. The comment                  ##
## can help to populate the variables with the appropriate                  ##
## values.                                                                  ##
##                                                                          ##
## IMPORTANT NOTE: This file contains plain text passwords and              ##
## should be secured to have read permission only by oracle user            ##
## or db administrator who owns this installation.                          ##
##############################################################################
#-------------------------------------------------------------------------------
# Do not change the following system generated value.
#-------------------------------------------------------------------------------
responseFileVersion=/oracle/assistants/rspfmt_dbca_response_schema_v19.0.0
#-----------------------------------------------------------------------------
# Name          : gdbName
# Datatype      : String
# Description   : Global database name of the database
# Valid values  : <db_name>.<db_domain> - when database domain isn't NULL
#                 <db_name>             - when database domain is NULL
# Default value : None
# Mandatory     : Yes
#-----------------------------------------------------------------------------
gdbName=ORCL
templateName=/opt/oracle/product/19c/dbhome_1/assistants/dbca/templates/General_Purpose.dbc
sysPassword=system123
systemPassword=system123
datafileDestination=
characterSet=UTF8

Create the database with dbca:

[oracle@dbserver ~]$ dbca -silent -createDatabase -responseFile /home/oracle/dbca.rsp
[WARNING] [DBT-06208] The 'SYS' password entered does not conform to the Oracle recommended standards.
   CAUSE:
a. Oracle recommends that the password entered should be at least 8 characters in length, contain at least 1 uppercase character, 1 lower case character and 1 digit [0-9].
b.The password entered is a keyword that Oracle does not recommend to be used as password
   ACTION: Specify a strong password. If required refer Oracle documentation for guidelines.
[WARNING] [DBT-06208] The 'SYSTEM' password entered does not conform to the Oracle recommended standards.
   CAUSE:
a. Oracle recommends that the password entered should be at least 8 characters in length, contain at least 1 uppercase character, 1 lower case character and 1 digit [0-9].
b.The password entered is a keyword that Oracle does not recommend to be used as password
   ACTION: Specify a strong password. If required refer Oracle documentation for guidelines.
Prepare for db operation
10% complete
Copying database files
40% complete
Creating and starting Oracle instance
42% complete
46% complete
50% complete
54% complete
60% complete
Completing Database Creation
66% complete
69% complete
70% complete
Executing Post Configuration Actions
100% complete
Database creation complete. For details check the logfiles at:
 /opt/oracle/cfgtoollogs/dbca/ORCL.
Database Information:
Global Database Name:ORCL
System Identifier(SID):ORCL
Look at the log file "/opt/oracle/cfgtoollogs/dbca/ORCL/ORCL.log" for further details.

Once the database is created successfully, set the oracle user's environment variables and you can log in.
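To avoid exporting the variables by hand on every login, the three environment variables mentioned above can go into the oracle user's ~/.bash_profile. A minimal sketch, using the RPM install's default Oracle home path shown earlier:

```shell
# Sketch of ~/.bash_profile additions for the oracle user.
# Paths are the defaults of the 19c RPM install described above.
export ORACLE_SID=ORCL
export ORACLE_HOME=/opt/oracle/product/19c/dbhome_1   # no trailing slash
export PATH=$PATH:$ORACLE_HOME/bin
```

After adding these lines, a new login shell for the oracle user can run sqlplus directly without any further setup.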

PostgreSQL 15 installation and basic use

1 Download, installation and database initialization

Download and install the PostgreSQL official yum repository, using yum install:

[root@localhost ~]# yum install -y https://download.postgresql.org/pub/repos/yum/reporpms/EL-7-x86_64/pgdg-redhat-repo-latest.noarch.rpm

Then download and install postgresql15-server:

[root@localhost ~]# yum install -y postgresql15-server
Error: Package: postgresql15-server-15.1-1PGDG.rhel7.x86_64 (pgdg15)
           Requires: libzstd.so.1()(64bit)
Error: Package: postgresql15-15.1-1PGDG.rhel7.x86_64 (pgdg15)
           Requires: libzstd.so.1()(64bit)
Error: Package: postgresql15-15.1-1PGDG.rhel7.x86_64 (pgdg15)
           Requires: libzstd >= 1.4.0
 You could try using --skip-broken to work around the problem
 You could try running: rpm -Va --nofiles --nodigest

One dependency package (libzstd) has to be installed manually, and its version must not be lower than 1.4.0:

[root@localhost ~]# wget https://download-ib01.fedoraproject.org/pub/epel/7/x86_64/Packages/l/libzstd-1.5.2-1.el7.x86_64.rpm
[root@localhost ~]# rpm -ivh libzstd-1.5.2-1.el7.x86_64.rpm

After installing postgresql15-server again, initialize the database:

[root@localhost ~]# /usr/pgsql-15/bin/postgresql-15-setup initdb

Enable the database at boot, then start it:

[root@localhost ~]# systemctl enable postgresql-15
[root@localhost ~]# systemctl start postgresql-15

2 Logging in to the database and basic use

Check the PostgreSQL operating system user:

[root@localhost ~]# cat /etc/passwd
postgres:x:26:26:PostgreSQL Server:/var/lib/pgsql:/bin/bash

Switch to the database user:

[root@localhost ~]# su - postgres
-bash-4.2$

Log in to the database:

-bash-4.2$ psql
psql (15.1)
Type "help" for help.

List the existing databases:

postgres=# \l
                                                 List of databases
   Name    |  Owner   | Encoding |   Collate   |    Ctype    | ICU Locale | Locale Provider |   Access privileges
-----------+----------+----------+-------------+-------------+------------+-----------------+-----------------------
 postgres  | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 |            | libc            |
 template0 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 |            | libc            | =c/postgres          +
           |          |          |             |             |            |                 | postgres=CTc/postgres
 template1 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 |            | libc            | =c/postgres          +
           |          |          |             |             |            |                 | postgres=CTc/postgres
(3 rows)

Back at the operating system, look at the PostgreSQL background processes:

[root@localhost ~]# top -b -u postgres -d 1 -n 1 -c
top - 03:17:36 up 54 min,  1 user,  load average: 0.00, 0.01, 0.05
Tasks: 116 total,   2 running, 114 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem : 12028356 total, 10859232 free,   481980 used,   687144 buff/cache
KiB Swap:  6160380 total,  6160380 free,        0 used. 11264344 avail Mem

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
 2070 postgres  20   0  401252  17368  15920 S   0.0  0.1   0:00.04 /usr/pgsql-15/bin/postmaster -D /var/lib/pgsql/15/d+
 2072 postgres  20   0  253084   2148    720 S   0.0  0.0   0:00.00 postgres: logger
 2073 postgres  20   0  401404   2316    812 S   0.0  0.0   0:00.00 postgres: checkpointer
 2074 postgres  20   0  401388   3368   1872 S   0.0  0.0   0:00.37 postgres: background writer
 2076 postgres  20   0  401388   6272   4780 S   0.0  0.1   0:00.08 postgres: walwriter
 2077 postgres  20   0  402872   3356   1588 S   0.0  0.0   0:00.00 postgres: autovacuum launcher
 2078 postgres  20   0  402852   3088   1356 S   0.0  0.0   0:00.00 postgres: logical replication launcher

[root@localhost ~]# ps -fu postgres
UID        PID  PPID  C STIME TTY          TIME CMD
postgres  1010     1  0 01:02 ?        00:00:00 /usr/pgsql-15/bin/postmaster -D /var/lib/pgsql/15/data/
postgres  1067  1010  0 01:02 ?        00:00:00 postgres: logger
postgres  1076  1010  0 01:02 ?        00:00:00 postgres: checkpointer
postgres  1077  1010  0 01:02 ?        00:00:00 postgres: background writer
postgres  1089  1010  0 01:02 ?        00:00:00 postgres: walwriter
postgres  1090  1010  0 01:02 ?        00:00:00 postgres: autovacuum launcher
postgres  1091  1010  0 01:02 ?        00:00:00 postgres: logical replication launcher

Check the database status:

-bash-4.2$ export PATH=$PATH:/usr/pgsql-15/bin
-bash-4.2$ pg_ctl status
pg_ctl: server is running (PID: 1010)
/usr/pgsql-15/bin/postgres "-D" "/var/lib/pgsql/15/data/"

Create a new database, connect to it, and create a table:

postgres=# create database test;
CREATE DATABASE
postgres=# \c test
You are now connected to database "test" as user "postgres".
test=# create table test (id int, name varchar(20));
CREATE TABLE
test=# insert into test values (1, 'test');
INSERT 0 1
test=# select * from test;
 id | name
----+------
  1 | test
(1 row)

To connect to the database remotely, first create a user and grant it privileges:

CREATE ROLE
postgres=# alter database test owner to test;
ALTER DATABASE
test=# alter table test owner to test;
ALTER TABLE

Here the owner of the test database and of the test table is simply changed to the newly created user; because the database owner was changed after the table was created, the table's owner has to be changed as well.

Edit pg_hba.conf in the database data directory and add the following line:

host test all 192.168.0.0/16 md5

In this line the second column restricts the rule to the test database and the third column, all, matches any user, so the test user can now connect to the test database from any address in the 192.168.0.0/16 network using md5 password authentication.
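The statement that produced the CREATE ROLE output above is not shown in the article; a sketch of what it would typically be follows. The password is a placeholder, not from the article.

```sql
-- Sketch: create a login-capable role for remote access
-- ('secret' is a placeholder password, not from the article)
CREATE ROLE test WITH LOGIN PASSWORD 'secret';

-- then, as in the article, hand ownership of the objects to the new role
ALTER DATABASE test OWNER TO test;
ALTER TABLE test OWNER TO test;
```

Remote clients can then connect with psql -h <server_ip> -U test test once the pg_hba.conf rule is in place and the server has been reloaded.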

OceanBase 4.0: monitoring OceanBase with Grafana

下载oceanbase4.0 all in one并安装之后,grafana、prometheus及obagent都已经安装了,可以使用obd工具部署并启动grafana对现有的oceanbase进行监控,部署的步骤十分简单,只需要编辑一个配置文件,然后使用obd cluster命令部署并启动即可。配置文件可以使用obd的示例文件里的all-components-min.yaml,在这个文件的基础上根据ob数据库和服务器的实际情况稍微更改一下即可。 1 拷贝并修改配置文件[root@my_ob example]# pwd /usr/local/oceanbase-all-in-one/obd/usr/obd/example obd的配置文件示例在上面的目录下,拷贝这个目录下面的all-components-min.yaml至当前用户的home目录下,编辑一下这个文件,删除里面除obagent、grafana、prometheus之外的部分,根据实际情况调整文件里内容,这次部署使用的配置文件如下: [root@localhost ~]# cat all-components-min.yaml ## Only need to configure when remote login is required # user: # username: your username # password: your password if need # key_file: your ssh-key file path if need # port: your ssh port, default 22 # timeout: ssh connection timeout (second), default 30 obagent: depends: - oceanbase-ce servers: - name: server1 # Please don't use hostname, only IP can be supported ip: 127.0.0.1 global: home_path: /root/obagent ob_monitor_status: active prometheus: depends: - obagent servers: - 192.168.56.101 global: home_path: /root/prometheus grafana: depends: - prometheus servers: - 192.168.56.101 global: home_path: /root/grafana login_password: oceanbase 由于是本地安装,不需要填入用户名和密码,数据库用的是本地loopback ip地址,Prometheus和grafana由于要从服务器外边访问,不能使用loopback地址,使用服务器的本地ip地址。2 部署grafana 有了配置文件,就可以使用这个文件进行部署了。[root@localhost ~]# obd cluster deploy gra -c all-components-min.yaml Package obagent-1.2.0-4.el7 is available. Package prometheus-2.37.1-10000102022110211.el7 is available. Package grafana-7.5.17-1 is available. 
install obagent-1.2.0 for local ok install prometheus-2.37.1 for local ok install grafana-7.5.17 for local ok +-----------------------------------------------------------------------------------------+ | Packages | +------------+---------+-----------------------+------------------------------------------+ | Repository | Version | Release | Md5 | +------------+---------+-----------------------+------------------------------------------+ | obagent | 1.2.0 | 4.el7 | 0e8f5ee68c337ea28514c9f3f820ea546227fa7e | | prometheus | 2.37.1 | 10000102022110211.el7 | 58913c7606f05feb01bc1c6410346e5fc31cf263 | | grafana | 7.5.17 | 1 | 1bf1f338d3a3445d8599dc6902e7aeed4de4e0d6 | +------------+---------+-----------------------+------------------------------------------+ Repository integrity check ok Parameter check ok Open ssh connection x [ERROR] root@192.168.56.101 connect failed: No authentication methods available 部署过程中报错了,检查报错信息,是在ssh连接192.168.56.101地址时出现错误,没有验证方法可用,这里需要设置一下至192.168.56.101的免ssh登录,ob的官方文档里与设置方法,十分简单,直接拿来使用。 先创建一个密钥[root@localhost ~]# ssh-keygen -t rsa Generating public/private rsa key pair. Enter file in which to save the key (/root/.ssh/id_rsa): Created directory '/root/.ssh'. Enter passphrase (empty for no passphrase): Enter same passphrase again: Your identification has been saved in /root/.ssh/id_rsa. Your public key has been saved in /root/.ssh/id_rsa.pub. The key fingerprint is: SHA256:OEcvGW+vAdQfJMC0gYVAm1N6dBL1ATGNEJS0Kd3PAMc root@localhost.localdomain The key's randomart image is: +---[RSA 2048]----+ | .o+@&%*o . | | B+OE*.+ | | * = B o . | | + + O . . | | o S * . | | o + . | | . . | | o | | . | +----[SHA256]-----+上面的passphrase直接回车即可。创建完密钥后将密钥拷贝至目标服务器[root@localhost ~]# ssh-copy-id root@192.168.56.101 /usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub" The authenticity of host '192.168.56.101 (192.168.56.101)' can't be established. ECDSA key fingerprint is SHA256:JEQIKavuwHdQrHfuS+xMNdMjZxAms6UQypFH83dD4yk. 
ECDSA key fingerprint is MD5:8e:2d:c4:1f:66:2f:9c:e6:16:fb:b6:35:76:c3:1d:b2.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.56.101's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'root@192.168.56.101'"
and check to make sure that only the key(s) you wanted were added.

Deploy again:

[root@localhost ~]# obd cluster deploy gra -c all-components-min.yaml
install obagent-1.2.0 for local ok
install prometheus-2.37.1 for local ok
install grafana-7.5.17 for local ok
+-----------------------------------------------------------------------------------------+
| Packages                                                                                |
+------------+---------+-----------------------+------------------------------------------+
| Repository | Version | Release               | Md5                                      |
+------------+---------+-----------------------+------------------------------------------+
| obagent    | 1.2.0   | 4.el7                 | 0e8f5ee68c337ea28514c9f3f820ea546227fa7e |
| prometheus | 2.37.1  | 10000102022110211.el7 | 58913c7606f05feb01bc1c6410346e5fc31cf263 |
| grafana    | 7.5.17  | 1                     | 1bf1f338d3a3445d8599dc6902e7aeed4de4e0d6 |
+------------+---------+-----------------------+------------------------------------------+
Repository integrity check ok
Parameter check ok
Open ssh connection ok
Cluster status check ok
Initializes obagent work home ok
Initializes prometheus work home ok
Initializes grafana work home ok
Remote obagent-1.2.0-4.el7-0e8f5ee68c337ea28514c9f3f820ea546227fa7e repository install ok
Remote obagent-1.2.0-4.el7-0e8f5ee68c337ea28514c9f3f820ea546227fa7e repository lib check ok
Remote prometheus-2.37.1-10000102022110211.el7-58913c7606f05feb01bc1c6410346e5fc31cf263 repository install ok
Remote prometheus-2.37.1-10000102022110211.el7-58913c7606f05feb01bc1c6410346e5fc31cf263 repository lib check
ok
Remote grafana-7.5.17-1-1bf1f338d3a3445d8599dc6902e7aeed4de4e0d6 repository install ok
Remote grafana-7.5.17-1-1bf1f338d3a3445d8599dc6902e7aeed4de4e0d6 repository lib check ok
gra deployed

gra is the name of the deployment; this time the deployment succeeded.

3 Start grafana

After deploying, the deployment needs to be started:

[root@localhost ~]# obd cluster start gra
Get local repositories ok
Search plugins ok
Open ssh connection ok
Load cluster param plugin ok
Check before start obagent ok
Check before start prometheus ok
Check before start grafana ok
Start obagent ok
obagent program health check ok
Start promethues ok
prometheus program health check ok
Connect to Prometheus ok
Initialize cluster ok
Start grafana ok
grafana program health check ok
Connect to grafana ok
Initialize cluster ok
+-----------------------------------------------+
| obagent                                       |
+-----------+-------------+------------+--------+
| ip        | server_port | pprof_port | status |
+-----------+-------------+------------+--------+
| 127.0.0.1 | 8088        | 8089       | active |
+-----------+-------------+------------+--------+
+-------------------------------------------------------+
| prometheus                                            |
+----------------------------+------+----------+--------+
| url                        | user | password | status |
+----------------------------+------+----------+--------+
| http://192.168.56.101:9090 |      |          | active |
+----------------------------+------+----------+--------+
+---------------------------------------------------------------------+
| grafana                                                             |
+----------------------------------------+-------+-----------+--------+
| url                                    | user  | password  | status |
+----------------------------------------+-------+-----------+--------+
| http://192.168.56.101:3000/d/oceanbase | admin | oceanbase | active |
+----------------------------------------+-------+-----------+--------+
gra running

The startup succeeded, and the command prints the access information for prometheus and grafana at the end. The database and grafana here are deployed on a virtual machine, and accessing them from outside the VM failed with an address-not-reachable error. The reason is that CentOS enables the firewall service by default. Check whether the firewall service is running:

[root@localhost ~]# systemctl status firewalld.service
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded
(/usr/lib/systemd/system/firewalld.service; enabled; vendor preset: enabled)
   Active: active (running) since Mon 2022-12-26 20:27:33 EST; 28min ago
     Docs: man:firewalld(1)
 Main PID: 695 (firewalld)
   CGroup: /system.slice/firewalld.service
           └─695 /usr/bin/python2 -Es /usr/sbin/firewalld --nofork --nopid

Dec 26 20:27:30 localhost.localdomain systemd[1]: Starting firewalld - dynamic firewall daemon...
Dec 26 20:27:33 localhost.localdomain systemd[1]: Started firewalld - dynamic firewall daemon.
Dec 26 20:27:34 localhost.localdomain firewalld[695]: WARNING: AllowZoneDrifting is enabled. This is considered a...now.
Hint: Some lines were ellipsized, use -l to show in full.

The firewall service is indeed running. To stop it temporarily, use:

[root@localhost ~]# systemctl stop firewalld.service

To disable the firewall service permanently as well, run:

[root@localhost ~]# systemctl disable firewalld.service
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.

With the firewall service stopped, prometheus and grafana can be reached from outside the VM. The prometheus web UI comes up directly; for grafana, the default user is admin and the password is oceanbase.
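If the machine must keep firewalld running, a less drastic fix than disabling it is to open just the two ports (`firewall-cmd --permanent --add-port=9090/tcp`, the same for 3000/tcp, then `firewall-cmd --reload`). Either way, a quick reachability probe from the client side confirms the result. The sketch below is an illustrative helper, not part of obd; it uses bash's built-in /dev/tcp redirection, so no extra tools are needed, and the host and ports in the comments are the ones from this deployment:

```shell
#!/usr/bin/env bash
# Probe a TCP port and print "open" or "closed".
check_port() {
  local host="$1" port="$2"
  # /dev/tcp/<host>/<port> is a bash pseudo-device; the exec fails (or the
  # 2-second timeout fires) when nothing is reachable on that port.
  if timeout 2 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "open"
  else
    echo "closed"
  fi
}

# Example (after stopping firewalld or opening the ports, both should print "open"):
#   check_port 192.168.56.101 9090   # prometheus
#   check_port 192.168.56.101 3000   # grafana
```

Running the probe from the host machine, rather than inside the VM, is what matters here: inside the VM both ports look open even while firewalld is still blocking outside traffic.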

OceanBase 4.0: installing and starting a single-node cluster with the all-in-one package

1 Download the OB 4.0 all-in-one package

The download page is at https://open.oceanbase.com/softwareCenter/community. Open the page and click the first entry, OceanBase All in One; a download dialog pops up. Make sure to pick the link matching your CPU architecture.

2 Upload the package to the server, extract and install

OceanBase 4.0 has requirements on the operating system and kernel version. For CentOS, version 7.x with kernel 3.10.0 or above is required. Check the OS and kernel versions:

[root@my_ob ~]# uname -a
Linux my_ob 3.10.0-1160.59.1.el7.x86_64 #1 SMP Wed Feb 23 16:47:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
[root@my_ob ~]# cat /etc/redhat-release
CentOS Linux release 7.9.2009 (Core)

The downloaded package was uploaded to the /root directory; extract it into /usr/local:

[root@my_ob ~]# ls -l oceanbase-all-in-one.4.0.0.0-beta-100120221102135736.el7.x86_64.tar.gz
-rw-r--r-- 1 root root 257703079 Dec 26 09:15 oceanbase-all-in-one.4.0.0.0-beta-100120221102135736.el7.x86_64.tar.gz
[root@my_ob ~]# tar -xzf oceanbase-all-in-one.4.0.0.0-beta-100120221102135736.el7.x86_64.tar.gz -C /usr/local

Switch to the /usr/local/oceanbase-all-in-one/bin directory and run the installation script:

[root@my_ob bin]# ./install.sh
name: grafana
version: 7.5.17
release:1
arch: x86_64
md5: 1bf1f338d3a3445d8599dc6902e7aeed4de4e0d6
add /usr/local/oceanbase-all-in-one/rpms/grafana-7.5.17-1.el7.x86_64.rpm to local mirror
name: obagent
version: 1.2.0
release:4.el7
arch: x86_64
md5: 0e8f5ee68c337ea28514c9f3f820ea546227fa7e
add /usr/local/oceanbase-all-in-one/rpms/obagent-1.2.0-4.el7.x86_64.rpm to local mirror
name: obproxy-ce
version: 4.0.0
release:5.el7
arch: x86_64
md5: de53232a951184fad75b15884458d85e31d2f6c3
add /usr/local/oceanbase-all-in-one/rpms/obproxy-ce-4.0.0-5.el7.x86_64.rpm to local mirror
name: oceanbase-ce
version: 4.0.0.0
release:100000272022110114.el7
arch: x86_64
md5: 42611dc51ca9bb28f36e60e4406ceea4a74914c7
add /usr/local/oceanbase-all-in-one/rpms/oceanbase-ce-4.0.0.0-100000272022110114.el7.x86_64.rpm to local mirror
name: oceanbase-ce-libs
version: 4.0.0.0
release:100000272022110114.el7
arch: x86_64
md5: 188919f8128394bf9b62e3989220ded05f1d14da
add /usr/local/oceanbase-all-in-one/rpms/oceanbase-ce-libs-4.0.0.0-100000272022110114.el7.x86_64.rpm to local mirror
name: prometheus
version: 2.37.1
release:10000102022110211.el7
arch: x86_64
md5: 58913c7606f05feb01bc1c6410346e5fc31cf263
add /usr/local/oceanbase-all-in-one/rpms/prometheus-2.37.1-10000102022110211.el7.x86_64.rpm to local mirror
Disable remote ok
#####################################################################
Install Finished
=====================================================================
Setup Environment:     source ~/.oceanbase-all-in-one/bin/env.sh
Quick Start:           obd demo
More Details:          obd -h
=====================================================================

As the output shows, the script installs not only ob and obproxy, but also prometheus and grafana.

3 Deploy and start the cluster

The obd demo command can quickly deploy and start an ob cluster, but it needs 50G of disk space, and the root filesystem here does not have that much free space, so a minimal cluster is started instead from one of the sample configuration files provided with obd.

The obd sample configuration files are in the /usr/local/oceanbase-all-in-one/obd/usr/obd/example directory. Copy mini-local-example.yaml from there to the current user's home directory, edit it briefly, and the cluster can be deployed and started:

[root@my_ob ~]# obd cluster deploy lo -c mini-local-example.yaml
install oceanbase-ce-4.0.0.0 for local ok
+--------------------------------------------------------------------------------------------+
| Packages                                                                                   |
+--------------+---------+------------------------+------------------------------------------+
| Repository   | Version | Release                | Md5                                      |
+--------------+---------+------------------------+------------------------------------------+
| oceanbase-ce | 4.0.0.0 | 100000272022110114.el7 | 42611dc51ca9bb28f36e60e4406ceea4a74914c7 |
+--------------+---------+------------------------+------------------------------------------+
Repository integrity check ok
Parameter check ok
Open ssh connection ok
Cluster status check ok
Initializes observer work home ok
Remote oceanbase-ce-4.0.0.0-100000272022110114.el7-42611dc51ca9bb28f36e60e4406ceea4a74914c7 repository install ok
Remote oceanbase-ce-4.0.0.0-100000272022110114.el7-42611dc51ca9bb28f36e60e4406ceea4a74914c7 repository lib check ok
lo deployed
[root@my_ob ~]# obd cluster start lo
Get local repositories ok
Search plugins ok
Open ssh connection ok
Load cluster param plugin ok
Check before start observer x
[ERROR] (127.0.0.1): when
production_mode is True, memory_limit can not be less then 16.0G
[ERROR] (127.0.0.1) / not enough disk space. (Avail: 11.4G, Need: 20.0G)
See https://www.oceanbase.com/product/ob-deployer/error-codes .

The deployment itself went through, but starting it failed with two errors: in production mode the memory limit cannot be less than 16G, and free disk space cannot be less than 20G. The first problem can be fixed by setting production_mode to false in the configuration file; the second requires pointing the ob working directories at a filesystem with more space. Delete the existing deployment, redeploy, and start the cluster again:

[root@localhost ~]# obd cluster start lo
Get local repositories ok
Search plugins ok
Open ssh connection ok
Load cluster param plugin ok
Check before start observer ok
[WARN] (127.0.0.1) The recommended value of fs.aio-max-nr is 1048576 (Current value: 65536)
[WARN] OBD-1007: (127.0.0.1) The recommended number of open files is 655350 (Current value: 65536)
[WARN] (127.0.0.1) clog and data use the same disk (/)
Start observer ok
observer program health check ok
Connect to observer ok
Initialize cluster ok
Wait for observer init ok
+---------------------------------------------+
| observer                                    |
+-----------+---------+------+-------+--------+
| ip        | version | port | zone  | status |
+-----------+---------+------+-------+--------+
| 127.0.0.1 | 4.0.0.0 | 2881 | zone1 | ACTIVE |
+-----------+---------+------+-------+--------+
obclient -h127.0.0.1 -P2881 -uroot -Doceanbase
lo running

The cluster started successfully, and obclient can now log in:

[root@localhost ~]# obclient -h127.0.0.1 -P2881 -uroot -Doceanbase
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Welcome to the OceanBase.  Commands end with ; or \g.
Your OceanBase connection id is 3221487617
Server version: OceanBase_CE 4.0.0.0 (r100000272022110114-6af7f9ae79cd0ecbafd4b1b88e2886ccdba0c3be) (Built Nov  1 2022 14:57:18)

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

Check the database version:

obclient [oceanbase]> select version();
+------------------------------+
| version()                    |
+------------------------------+
| 5.7.25-OceanBase_CE-v4.0.0.0 |
+------------------------------+
1 row in set (0.001 sec)
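The two startup errors above can be caught before running obd cluster start. The following rough preflight sketch mirrors obd's two checks (at least 16G of memory when production_mode is true, and at least 20G free disk under the work directory); the 16/20 thresholds come from the error messages above, and the helper itself is illustrative, not part of obd:

```shell
#!/usr/bin/env bash
# Preflight check mirroring the two obd start-time errors seen above.
preflight() {
  local workdir="$1"
  local mem_gb avail_gb
  # Total memory in GiB, from /proc/meminfo (value is in kB).
  mem_gb=$(awk '/MemTotal/ {printf "%d", $2 / 1024 / 1024}' /proc/meminfo)
  # Free space in GiB on the filesystem holding the ob work directory.
  avail_gb=$(df -BG --output=avail "$workdir" | tail -1 | tr -dc '0-9')
  echo "memory=${mem_gb}G avail=${avail_gb}G"
  [ "$mem_gb" -ge 16 ]   || echo "WARN: production_mode needs memory_limit >= 16G"
  [ "$avail_gb" -ge 20 ] || echo "WARN: observer needs >= 20G free disk"
}

# The start attempt above failed on "/", so check that filesystem:
preflight /
```

On the VM used in this post it would print the same shortfalls obd reported, which is cheaper to learn before a failed start than after.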

Oracle: temporary tablespaces and temporary tablespace groups

1 Temporary tablespaces

A temporary tablespace stores transient data that exists only for the duration of a session. When sort operations cannot complete in memory, temporary tablespaces improve their concurrency and also make space management during sorts more efficient.

Temporary tablespaces are used to store four kinds of objects: intermediate sort results, temporary tables and temporary indexes, temporary LOBs, and temporary B-trees.

A temporary tablespace can be shared between instances; within one temporary tablespace, all sort operations of an instance share a single sort segment. The first SQL statement that sorts in the temporary tablespace creates the sort segment, which persists from then on; Oracle releases it at instance shutdown.

Starting with Oracle 12.2, users can create local temporary tablespaces. A local temporary tablespace cannot be shared between instances and can hold only the temporary results of SQL statements, such as sorts, hash aggregations, and joins; those results are accessible from a single instance only. When people simply say "temporary tablespace", they usually mean a shared one, and on RAC temporary tablespaces are normally created on shared disk.

Database installation creates a default temporary tablespace named temp, which is the temporary tablespace for all users:

SQL> select distinct TABLESPACE_NAME from dba_temp_files;

TABLESPACE_NAME
------------------------------
TEMP

The temporary tablespace can be set at the database level or at the user level. At the database level:

SQL> alter database default temporary tablespace TBS_TEMP1;
Database altered.
SQL> select distinct TEMPORARY_TABLESPACE from dba_users;

TEMPORARY_TABLESPACE
------------------------------
TBS_TEMP1

It can also be set at the user level:

SQL> alter user test temporary tablespace TBS_TEMP2;
User altered.
SQL> select USERNAME,TEMPORARY_TABLESPACE from dba_users where username='TEST';

USERNAME                       TEMPORARY_TABLESPACE
------------------------------ ------------------------------
TEST                           TBS_TEMP2

2 Temporary tablespace groups

A tablespace group lets one user draw temporary space from multiple tablespaces. Tablespace groups can be used only for temporary tablespaces. A group is useful when a single tablespace is not large enough to hold the sort results, which easily happens when a table has many partitions. A tablespace group also lets the parallel execution servers of a single parallel operation use different temporary tablespaces, and when a user has several sessions logged in at the same time, different sessions can likewise use different temporary tablespaces.

A tablespace group does not need to be created separately: a group can be named when creating a temporary tablespace, and later temporary tablespaces can name the same group. The group is dropped when the last temporary tablespace in it is dropped.

SQL> create temporary tablespace tbs_temp1 tempfile size 10M tablespace group tbg1;
Tablespace created.
SQL> create temporary tablespace tbs_temp2 tempfile size 10M tablespace group tbg1;
Tablespace created.
-- A temporary tablespace group now exists with two temporary tablespaces in it; query the current tablespace groups:
SQL> select * from DBA_TABLESPACE_GROUPS;

GROUP_NAME                     TABLESPACE_NAME
------------------------------ ------------------------------
TBG1                           TBS_TEMP1
TBG1                           TBS_TEMP2

Connect a second session to the database with PL/SQL Developer; the ids of the two sessions are 197 and 131. Query the temporary tablespaces these sessions are using:

SQL> select username, session_num, tablespace from v$sort_usage;

USERNAME                       SESSION_NUM TABLESPACE
------------------------------ ----------- -------------------------------
TEST                                    23 TBS_TEMP2
TEST                                    23 TBS_TEMP2
TEST                                    53 TBS_TEMP1

Here session_num corresponds to serial# in v$session; look up the sids of the two sessions:

SQL> select sid, serial# from v$session where serial# in (23,53);

       SID    SERIAL#
---------- ----------
       131         53
       197         23

Docker Community Edition: removal and offline installation

1 Download the offline installation packages

Docker Community Edition can be installed offline either from the static binary package or from rpm packages. Installing from the binary package is simple, but the configuration afterwards is a bit more involved. With rpm packages, the tedious and error-prone part is downloading the dependencies; once the package and its dependencies are downloaded completely, installation and configuration are easy. Downloading dependencies by hand invites mistakes, so here yumdownloader is used to fetch the package and its dependencies with a single command.

yumdownloader requires yum-utils to be installed; check with:

[root@ docker_package]# yum list installed |grep yum
yum.noarch                               4.7.0-4.el8                          @System
yum-utils.noarch                         4.0.21-3.el8                         @base

Download the docker-ce package and its dependencies with yumdownloader; the --resolve option resolves dependencies and downloads the dependency packages that need to be installed:

[root@ ~]# yumdownloader --resolve --destdir=/root/docker_package/ docker-ce
Last metadata expiration check: 0:03:06 ago on Thu 20 Oct 2022 09:32:14 AM CST.
docker-ce-20.10.18-3.el8.x86_64.rpm                134 kB/s |  21 MB     02:40

Docker-ce is already installed on this machine, so yumdownloader downloaded only the package itself, not its dependencies. To download all the dependencies, check yumdownloader's help for a suitable option:

[root@ ~]# yumdownloader --help
usage: dnf download [-c [config file]] [-q] [-v] [--version] [--installroot [path]] [--nodocs] [--noplugins] [--enableplugin [plugin]] [--disableplugin [plugin]] [--releasever RELEASEVER] [--setopt SETOPTS] [--skip-broken] [-h] [--allowerasing] [-b | --nobest] [-C] [-R [minutes]] [-d [debug level]] [--debugsolver] [--showduplicates] [-e ERRORLEVEL] [--obsoletes] [--rpmverbosity [debug level name]] [-y] [--assumeno] [--enablerepo [repo]] [--disablerepo [repo] | --repo [repo]] [--enable | --disable] [-x [package]] [--disableexcludes [repo]] [--repofrompath [repo,path]] [--noautoremove] [--nogpgcheck] [--color COLOR] [--refresh] [-4] [-6] [--destdir DESTDIR] [--downloadonly] [--comment COMMENT] [--bugfix] [--enhancement] [--newpackage] [--security] [--advisory ADVISORY] [--bz BUGZILLA] [--cve CVES] [--sec-severity {Critical,Important,Moderate,Low}] [--forcearch ARCH] [--source] [--debuginfo] [--debugsource] [--arch [arch]] [--resolve] [--alldeps] [--url] [--urlprotocols {http,https,rsync,ftp}] packages [packages ...]
Download package to current directory General DNF options: -c [config file], --config [config file] config file location -q, --quiet quiet operation -v, --verbose verbose operation --version show DNF version and exit --installroot [path] set install root --nodocs do not install documentations --noplugins disable all plugins --enableplugin [plugin] enable plugins by name --disableplugin [plugin] disable plugins by name --releasever RELEASEVER override the value of $releasever in config and repo files --setopt SETOPTS set arbitrary config and repo options --skip-broken resolve depsolve problems by skipping packages -h, --help, --help-cmd show command help --allowerasing allow erasing of installed packages to resolve dependencies -b, --best try the best available package versions in transactions. --nobest do not limit the transaction to the best candidate -C, --cacheonly run entirely from system cache, don't update cache -R [minutes], --randomwait [minutes] maximum command wait time -d [debug level], --debuglevel [debug level] debugging output level --debugsolver dumps detailed solving results into files --showduplicates show duplicates, in repos, in list/search commands -e ERRORLEVEL, --errorlevel ERRORLEVEL error output level --obsoletes enables dnf's obsoletes processing logic for upgrade or display capabilities that the package obsoletes for info, list and repoquery --rpmverbosity [debug level name] debugging output level for rpm -y, --assumeyes automatically answer yes for all questions --assumeno automatically answer no for all questions --enablerepo [repo] Enable additional repositories. List option. Supports globs, can be specified multiple times. --disablerepo [repo] Disable repositories. List option. Supports globs, can be specified multiple times. 
--repo [repo], --repoid [repo] enable just specific repositories by an id or a glob, can be specified multiple times --enable enable repos with config-manager command (automatically saves) --disable disable repos with config-manager command (automatically saves) -x [package], --exclude [package], --excludepkgs [package] exclude packages by name or glob --disableexcludes [repo], --disableexcludepkgs [repo] disable excludepkgs --repofrompath [repo,path] label and path to an additional repository to use (same path as in a baseurl), can be specified multiple times. --noautoremove disable removal of dependencies that are no longer --nogpgcheck disable gpg signature checking (if RPM policy allows) --color COLOR control whether color is used --refresh set metadata as expired before running the command -4 resolve to IPv4 addresses only -6 resolve to IPv6 addresses only --destdir DESTDIR, --downloaddir DESTDIR set directory to copy packages to --downloadonly only download packages --comment COMMENT add a comment to transaction --bugfix Include bugfix relevant packages, in updates --enhancement Include enhancement relevant packages, in updates --newpackage Include newpackage relevant packages, in updates --security Include security relevant packages, in updates --advisory ADVISORY, --advisories ADVISORY Include packages needed to fix the given advisory, in updates --bz BUGZILLA, --bzs BUGZILLA Include packages needed to fix the given BZ, in updates --cve CVES, --cves CVES Include packages needed to fix the given CVE, in updates --sec-severity {Critical,Important,Moderate,Low}, --secseverity {Critical,Important,Moderate,Low} Include security relevant packages matching the severity, in updates --forcearch ARCH Force the use of an architecture Download command-specific options: --source download the src.rpm instead --debuginfo download the -debuginfo package instead --debugsource download the -debugsource package instead --arch [arch], --archlist [arch] limit the query to 
packages of given architectures. --resolve resolve and download needed dependencies --alldeps when running with --resolve, download all dependencies (do not exclude already installed ones) --url, --urls print list of urls where the rpms can be downloaded instead of downloading --urlprotocols {http,https,rsync,ftp} when running with --url, limit to specific protocols packages packages to download

The help shows an --alldeps option, described as follows:

--alldeps             when running with --resolve, download all dependencies
                      (do not exclude already installed ones)

Used together with --resolve, it downloads all dependencies without excluding the ones that are already installed.

Run the following command to download the package together with all of its dependencies:

[root@ ~]# yumdownloader --resolve --alldeps --destdir=/root/docker_package/ docker-ce
Last metadata expiration check: 0:08:01 ago on Thu 20 Oct 2022 09:32:14 AM CST.
[SKIPPED] docker-ce-20.10.18-3.el8.x86_64.rpm: Already downloaded (2/199): libsemanage-2.9-6.el8.x86_64.rpm 1.4 MB/s | 165 kB 00:00 (3/199): libselinux-2.9-5.el8.x86_64.rpm 1.2 MB/s | 165 kB 00:00 (4/199): libselinux-utils-2.9-5.el8.x86_64.rpm 1.6 MB/s | 243 kB 00:00 (5/199): python3-pip-wheel-9.0.3-20.el8.noarch.rpm 10 MB/s | 1.0 MB 00:00 (6/199): libsepol-2.9-3.el8.x86_64.rpm 3.7 MB/s | 340 kB 00:00 (7/199): python3-policycoreutils-2.9-16.el8.noarch.rpm 18 MB/s | 2.2 MB 00:00 (8/199): libsigsegv-2.11-5.el8.x86_64.rpm 293 kB/s | 30 kB 00:00 (9/199): libsmartcols-2.32.1-28.el8.x86_64.rpm 1.4 MB/s | 177 kB 00:00 (10/199): slirp4netns-1.1.8-1.module_el8.5.0+890+6b136101.x86_64.rpm 503 kB/s | 51 kB 00:00 (11/199): python3-setools-4.3.0-2.el8.x86_64.rpm 8.7 MB/s | 626 kB 00:00 (12/199): acl-2.2.53-1.el8.x86_64.rpm 1.1 MB/s | 81 kB 00:00 (13/199): libssh-0.9.4-3.el8.x86_64.rpm 3.2 MB/s | 215 kB 00:00 (14/199): gmp-6.1.2-10.el8.x86_64.rpm 3.5 MB/s | 322 kB 00:00 (15/199): python3-setuptools-wheel-39.2.0-6.el8.noarch.rpm 5.5 MB/s | 289 kB 00:00 (16/199): libssh-config-0.9.4-3.el8.noarch.rpm 201 kB/s | 19 kB 00:00 (17/199): gnutls-3.6.16-4.el8.x86_64.rpm 14 MB/s | 1.0 MB 00:00 (18/199):
grep-3.1-6.el8.x86_64.rpm 2.7 MB/s | 274 kB 00:00 (19/199): audit-libs-3.0-0.17.20191104git1c2f876.el8.x86_64.rpm 1.3 MB/s | 116 kB 00:00 (20/199): grub2-common-2.02-106.el8.noarch.rpm 7.1 MB/s | 891 kB 00:00 (21/199): grub2-tools-2.02-106.el8.x86_64.rpm 22 MB/s | 2.0 MB 00:00 (22/199): grub2-tools-minimal-2.02-106.el8.x86_64.rpm 2.5 MB/s | 210 kB 00:00 (23/199): grubby-8.40-42.el8.x86_64.rpm 611 kB/s | 49 kB 00:00 (24/199): rdma-core-35.0-1.el8.x86_64.rpm 1.0 MB/s | 59 kB 00:00 (25/199): libstdc++-8.5.0-4.el8_5.x86_64.rpm 6.3 MB/s | 453 kB 00:00 (26/199): gzip-1.9-12.el8.x86_64.rpm 2.3 MB/s | 167 kB 00:00 (27/199): basesystem-11-5.el8.noarch.rpm 126 kB/s | 10 kB 00:00 (28/199): bash-4.4.20-2.el8.x86_64.rpm 11 MB/s | 1.5 MB 00:00 (29/199): hwdata-0.314-8.10.el8.noarch.rpm 16 MB/s | 1.7 MB 00:00 (30/199): hardlink-1.3-6.el8.x86_64.rpm 138 kB/s | 29 kB 00:00 (31/199): readline-7.0-10.el8.x86_64.rpm 2.1 MB/s | 199 kB 00:00 (32/199): rpm-4.14.3-19.el8.x86_64.rpm 7.7 MB/s | 543 kB 00:00 (33/199): libxkbcommon-0.9.1-1.el8.x86_64.rpm 1.5 MB/s | 116 kB 00:00 (34/199): info-6.5-6.el8.x86_64.rpm 2.7 MB/s | 198 kB 00:00 (35/199): rpm-libs-4.14.3-19.el8.x86_64.rpm 4.5 MB/s | 344 kB 00:00 (36/199): libtasn1-4.13-3.el8.x86_64.rpm 589 kB/s | 76 kB 00:00 (37/199): brotli-1.0.6-3.el8.x86_64.rpm 5.5 MB/s | 323 kB 00:00 (38/199): libtirpc-1.1.4-5.el8.x86_64.rpm 1.6 MB/s | 112 kB 00:00 (39/199): iptables-1.8.4-20.el8.x86_64.rpm 6.6 MB/s | 585 kB 00:00 (40/199): bzip2-libs-1.0.6-26.el8.x86_64.rpm 585 kB/s | 48 kB 00:00 (41/199): iptables-libs-1.8.4-20.el8.x86_64.rpm 1.7 MB/s | 107 kB 00:00 (42/199): rpm-plugin-selinux-4.14.3-19.el8.x86_64.rpm 311 kB/s | 77 kB 00:00 (43/199): ca-certificates-2021.2.50-80.0.el8_4.noarch.rpm 7.1 MB/s | 390 kB 00:00 (44/199): centos-gpg-keys-8-3.el8.noarch.rpm 195 kB/s | 12 kB 00:00 (45/199): centos-linux-release-8.5-1.2111.el8.noarch.rpm 383 kB/s | 22 kB 00:00 (46/199): libunistring-0.9.9-3.el8.x86_64.rpm 3.6 MB/s | 422 kB 00:00 (47/199): 
centos-linux-repos-8-3.el8.noarch.rpm 264 kB/s | 20 kB 00:00 (48/199): checkpolicy-2.9-1.el8.x86_64.rpm 2.9 MB/s | 348 kB 00:00 (49/199): chkconfig-1.19.1-1.el8.x86_64.rpm 1.9 MB/s | 198 kB 00:00 (50/199): libutempter-1.1.6-14.el8.x86_64.rpm 235 kB/s | 32 kB 00:00 (51/199): libuuid-2.32.1-28.el8.x86_64.rpm 1.3 MB/s | 96 kB 00:00 (52/199): coreutils-common-8.30-12.el8.x86_64.rpm 20 MB/s | 2.0 MB 00:00 (53/199): libverto-0.3.0-5.el8.x86_64.rpm 305 kB/s | 24 kB 00:00 (54/199): coreutils-8.30-12.el8.x86_64.rpm 7.1 MB/s | 1.2 MB 00:00 (55/199): cpio-2.12-10.el8.x86_64.rpm 3.8 MB/s | 265 kB 00:00 (56/199): cracklib-2.9.6-15.el8.x86_64.rpm 1.0 MB/s | 93 kB 00:00 (57/199): sed-4.5-2.el8.x86_64.rpm 5.2 MB/s | 298 kB 00:00 (58/199): selinux-policy-3.14.3-80.el8_5.2.noarch.rpm 8.2 MB/s | 636 kB 00:00 (59/199): cracklib-dicts-2.9.6-15.el8.x86_64.rpm 21 MB/s | 4.0 MB 00:00 (60/199): libxcrypt-4.1.1-6.el8.x86_64.rpm 740 kB/s | 73 kB 00:00 (61/199): crypto-policies-scripts-20210617-1.gitc776d3e.el8.noarch.rpm 1.6 MB/s | 83 kB 00:00 (62/199): crypto-policies-20210617-1.gitc776d3e.el8.noarch.rpm 630 kB/s | 63 kB 00:00 (63/199): libxml2-2.9.7-9.el8_4.2.x86_64.rpm 10 MB/s | 696 kB 00:00 (64/199): json-c-0.13.1-2.el8.x86_64.rpm 673 kB/s | 40 kB 00:00 (65/199): cryptsetup-libs-2.3.3-4.el8.x86_64.rpm 6.7 MB/s | 470 kB 00:00 (66/199): kbd-2.0.4-10.el8.x86_64.rpm 5.3 MB/s | 390 kB 00:00 (67/199): kbd-legacy-2.0.4-10.el8.noarch.rpm 6.8 MB/s | 481 kB 00:00 (68/199): kbd-misc-2.0.4-10.el8.noarch.rpm 17 MB/s | 1.5 MB 00:00 (69/199): curl-7.61.1-22.el8.x86_64.rpm 4.6 MB/s | 351 kB 00:00 (70/199): libzstd-1.4.4-1.el8.x86_64.rpm 3.3 MB/s | 266 kB 00:00 (71/199): setup-2.12.2-6.el8.noarch.rpm 2.4 MB/s | 181 kB 00:00 (72/199): shadow-utils-4.6-14.el8.x86_64.rpm 8.1 MB/s | 1.2 MB 00:00 (73/199): shared-mime-info-1.9-3.el8.x86_64.rpm 2.5 MB/s | 329 kB 00:00 (74/199): selinux-policy-targeted-3.14.3-80.el8_5.2.noarch.rpm 36 MB/s | 15 MB 00:00 (75/199): dbus-1.12.8-14.el8.x86_64.rpm 196 kB/s | 41 kB 
00:00 (76/199): cyrus-sasl-lib-2.1.27-5.el8.x86_64.rpm 552 kB/s | 123 kB 00:00 (77/199): dbus-daemon-1.12.8-14.el8.x86_64.rpm 2.6 MB/s | 240 kB 00:00 (78/199): dbus-common-1.12.8-14.el8.noarch.rpm 468 kB/s | 46 kB 00:00 (79/199): lua-libs-5.3.4-12.el8.x86_64.rpm 1.3 MB/s | 118 kB 00:00 (80/199): dbus-libs-1.12.8-14.el8.x86_64.rpm 2.6 MB/s | 184 kB 00:00 (81/199): dbus-tools-1.12.8-14.el8.x86_64.rpm 1.2 MB/s | 85 kB 00:00 (82/199): sqlite-libs-3.26.0-15.el8.x86_64.rpm 5.9 MB/s | 581 kB 00:00 (83/199): device-mapper-1.02.177-10.el8.x86_64.rpm 6.3 MB/s | 377 kB 00:00 (84/199): lz4-libs-1.8.3-3.el8_4.x86_64.rpm 1.0 MB/s | 66 kB 00:00 (85/199): xkeyboard-config-2.28-1.el8.noarch.rpm 8.3 MB/s | 782 kB 00:00 (86/199): device-mapper-libs-1.02.177-10.el8.x86_64.rpm 5.9 MB/s | 409 kB 00:00 (87/199): kmod-25-18.el8.x86_64.rpm 2.1 MB/s | 126 kB 00:00 (88/199): keyutils-libs-1.5.10-9.el8.x86_64.rpm 180 kB/s | 34 kB 00:00 (89/199): memstrack-0.1.11-1.el8.x86_64.rpm 648 kB/s | 48 kB 00:00 (90/199): diffutils-3.6-6.el8.x86_64.rpm 2.3 MB/s | 358 kB 00:00 (91/199): kmod-libs-25-18.el8.x86_64.rpm 825 kB/s | 68 kB 00:00 (92/199): kpartx-0.8.4-17.el8.x86_64.rpm 1.3 MB/s | 113 kB 00:00 (93/199): dracut-049-191.git20210920.el8.x86_64.rpm 4.6 MB/s | 374 kB 00:00 (94/199): mpfr-3.1.6-1.el8.x86_64.rpm 1.9 MB/s | 221 kB 00:00 (95/199): ncurses-6.1-9.20180224.el8.x86_64.rpm 3.0 MB/s | 387 kB 00:00 (96/199): krb5-libs-1.18.2-14.el8.x86_64.rpm 5.4 MB/s | 840 kB 00:00 (97/199): ncurses-base-6.1-9.20180224.el8.noarch.rpm 1.3 MB/s | 81 kB 00:00 (98/199): libacl-2.2.53-1.el8.x86_64.rpm 546 kB/s | 35 kB 00:00 (99/199): ncurses-libs-6.1-9.20180224.el8.x86_64.rpm 5.7 MB/s | 334 kB 00:00 (100/199): elfutils-debuginfod-client-0.185-1.el8.x86_64.rpm 906 kB/s | 66 kB 00:00 (101/199): libarchive-3.3.3-1.el8.x86_64.rpm 6.3 MB/s | 359 kB 00:00 (102/199): elfutils-default-yama-scope-0.185-1.el8.noarch.rpm 937 kB/s | 49 kB 00:00 (103/199): elfutils-libelf-0.185-1.el8.x86_64.rpm 3.3 MB/s | 221 kB 00:00 
(104/199): nettle-3.4.1-7.el8.x86_64.rpm 4.5 MB/s | 301 kB 00:00 (105/199): elfutils-libs-0.185-1.el8.x86_64.rpm 2.9 MB/s | 292 kB 00:00 (106/199): expat-2.2.5-4.el8.x86_64.rpm 1.1 MB/s | 111 kB 00:00 (107/199): libattr-2.4.48-3.el8.x86_64.rpm 376 kB/s | 27 kB 00:00 (108/199): systemd-239-51.el8_5.2.x86_64.rpm 21 MB/s | 3.6 MB 00:00 (109/199): file-5.33-20.el8.x86_64.rpm 1.1 MB/s | 77 kB 00:00 (110/199): file-libs-5.33-20.el8.x86_64.rpm 5.4 MB/s | 543 kB 00:00 (111/199): filesystem-3.8-6.el8.x86_64.rpm 11 MB/s | 1.1 MB 00:00 (112/199): libblkid-2.32.1-28.el8.x86_64.rpm 2.9 MB/s | 217 kB 00:00 (113/199): findutils-4.6.0-20.el8.x86_64.rpm 4.2 MB/s | 528 kB 00:00 (114/199): systemd-pam-239-51.el8_5.2.x86_64.rpm 6.9 MB/s | 477 kB 00:00 (115/199): libcap-2.26-5.el8.x86_64.rpm 981 kB/s | 60 kB 00:00 (116/199): systemd-libs-239-51.el8_5.2.x86_64.rpm 11 MB/s | 1.1 MB 00:00 (117/199): libcap-ng-0.7.11-1.el8.x86_64.rpm 550 kB/s | 33 kB 00:00 (118/199): tar-1.30-5.el8.x86_64.rpm 9.3 MB/s | 838 kB 00:00 (119/199): fuse-common-3.2.1-12.el8.x86_64.rpm 1.3 MB/s | 21 kB 00:00 (120/199): systemd-udev-239-51.el8_5.2.x86_64.rpm 12 MB/s | 1.6 MB 00:00 (121/199): libcgroup-0.41-19.el8.x86_64.rpm 986 kB/s | 70 kB 00:00 (122/199): libcom_err-1.45.6-2.el8.x86_64.rpm 822 kB/s | 49 kB 00:00 (123/199): fuse3-3.2.1-12.el8.x86_64.rpm 676 kB/s | 50 kB 00:00 (124/199): fuse3-libs-3.2.1-12.el8.x86_64.rpm 1.5 MB/s | 94 kB 00:00 (125/199): libcroco-0.6.12-4.el8_2.1.x86_64.rpm 1.5 MB/s | 113 kB 00:00 (126/199): libcurl-7.61.1-22.el8.x86_64.rpm 4.7 MB/s | 301 kB 00:00 (127/199): openldap-2.4.46-18.el8.x86_64.rpm 3.8 MB/s | 352 kB 00:00 (128/199): trousers-0.3.15-1.el8.x86_64.rpm 2.2 MB/s | 152 kB 00:00 (129/199): gawk-4.2.1-2.el8.x86_64.rpm 9.3 MB/s | 1.1 MB 00:00 (130/199): libdb-5.3.28-42.el8_4.x86_64.rpm 7.2 MB/s | 751 kB 00:00 (131/199): gdbm-1.18-1.el8.x86_64.rpm 754 kB/s | 130 kB 00:00 (132/199): libdb-utils-5.3.28-42.el8_4.x86_64.rpm 2.0 MB/s | 150 kB 00:00 (133/199): 
trousers-lib-0.3.15-1.el8.x86_64.rpm 3.2 MB/s | 168 kB 00:00 (134/199): fuse-overlayfs-1.7.1-1.module_el8.5.0+890+6b136101.x86_64.rpm 1.1 MB/s | 73 kB 00:00 (135/199): openssl-1.1.1k-5.el8_5.x86_64.rpm 8.0 MB/s | 709 kB 00:00 (136/199): gdbm-libs-1.18-1.el8.x86_64.rpm 324 kB/s | 60 kB 00:00 (137/199): tzdata-2021e-1.el8.noarch.rpm 5.0 MB/s | 474 kB 00:00 (138/199): gettext-0.19.8.1-17.el8.x86_64.rpm 8.5 MB/s | 1.1 MB 00:00 (139/199): openssl-libs-1.1.1k-5.el8_5.x86_64.rpm 12 MB/s | 1.5 MB 00:00 (140/199): gettext-libs-0.19.8.1-17.el8.x86_64.rpm 3.2 MB/s | 314 kB 00:00 (141/199): openssl-pkcs11-0.4.10-2.el8.x86_64.rpm 1.3 MB/s | 66 kB 00:00 (142/199): os-prober-1.74-9.el8.x86_64.rpm 853 kB/s | 51 kB 00:00 (143/199): p11-kit-0.23.22-1.el8.x86_64.rpm 4.7 MB/s | 324 kB 00:00 (144/199): util-linux-2.32.1-28.el8.x86_64.rpm 16 MB/s | 2.5 MB 00:00 (145/199): glib2-2.56.4-156.el8.x86_64.rpm 15 MB/s | 2.5 MB 00:00 (146/199): libfdisk-2.32.1-28.el8.x86_64.rpm 2.7 MB/s | 251 kB 00:00 (147/199): p11-kit-trust-0.23.22-1.el8.x86_64.rpm 734 kB/s | 137 kB 00:00 (148/199): pam-1.3.1-15.el8.x86_64.rpm 4.1 MB/s | 739 kB 00:00 (149/199): glibc-2.28-164.el8.x86_64.rpm 19 MB/s | 3.6 MB 00:00 (150/199): glibc-common-2.28-164.el8.x86_64.rpm 13 MB/s | 1.3 MB 00:00 (151/199): libffi-3.1-22.el8.x86_64.rpm 481 kB/s | 37 kB 00:00 (152/199): glibc-all-langpacks-2.28-164.el8.x86_64.rpm 38 MB/s | 25 MB 00:00 (153/199): which-2.21-16.el8.x86_64.rpm 148 kB/s | 49 kB 00:00 (154/199): libgcc-8.5.0-4.el8_5.x86_64.rpm 1.3 MB/s | 79 kB 00:00 (155/199): pciutils-3.7.0-1.el8.x86_64.rpm 1.6 MB/s | 105 kB 00:00 (156/199): libgcrypt-1.8.5-6.el8.x86_64.rpm 4.9 MB/s | 463 kB 00:00 (157/199): pciutils-libs-3.7.0-1.el8.x86_64.rpm 906 kB/s | 54 kB 00:00 (158/199): pcre-8.42-6.el8.x86_64.rpm 2.9 MB/s | 211 kB 00:00 (159/199): xz-5.2.4-3.el8.x86_64.rpm 2.1 MB/s | 153 kB 00:00 (160/199): docker-ce-rootless-extras-20.10.18-3.el8.x86_64.rpm 133 kB/s | 4.6 MB 00:35 (161/199): libgomp-8.5.0-4.el8_5.x86_64.rpm 1.8 MB/s | 
206 kB 00:00 (162/199): xz-libs-5.2.4-3.el8.x86_64.rpm 647 kB/s | 94 kB 00:00 (163/199): libgpg-error-1.31-1.el8.x86_64.rpm 2.5 MB/s | 242 kB 00:00 (164/199): pcre2-10.32-2.el8.x86_64.rpm 1.7 MB/s | 246 kB 00:00 (165/199): zlib-1.2.11-17.el8.x86_64.rpm 972 kB/s | 102 kB 00:00 (166/199): docker-scan-plugin-0.17.0-3.el8.x86_64.rpm 132 kB/s | 3.8 MB 00:29 (167/199): libibverbs-35.0-1.el8.x86_64.rpm 2.1 MB/s | 335 kB 00:00 (168/199): libidn2-2.2.0-1.el8.x86_64.rpm 1.0 MB/s | 94 kB 00:00 (169/199): libkcapi-1.2.0-2.el8.x86_64.rpm 913 kB/s | 48 kB 00:00 (170/199): libkcapi-hmaccalc-1.2.0-2.el8.x86_64.rpm 478 kB/s | 31 kB 00:00 (171/199): libmnl-1.0.4-6.el8.x86_64.rpm 173 kB/s | 30 kB 00:00 (172/199): pigz-2.4-4.el8.x86_64.rpm 1.0 MB/s | 79 kB 00:00 (173/199): platform-python-3.6.8-41.el8.x86_64.rpm 1.4 MB/s | 85 kB 00:00 (174/199): platform-python-pip-9.0.3-20.el8.noarch.rpm 13 MB/s | 1.7 MB 00:00 (175/199): libmount-2.32.1-28.el8.x86_64.rpm 2.9 MB/s | 234 kB 00:00 (176/199): platform-python-setuptools-39.2.0-6.el8.noarch.rpm 6.5 MB/s | 632 kB 00:00 (177/199): policycoreutils-2.9-16.el8.x86_64.rpm 2.5 MB/s | 373 kB 00:00 (178/199): container-selinux-2.167.0-1.module_el8.5.0+911+f19012f9.noarch.rpm 574 kB/s | 54 kB 00:00 (179/199): libnetfilter_conntrack-1.0.6-5.el8.x86_64.rpm 742 kB/s | 65 kB 00:00 (180/199): libslirp-4.4.0-1.module_el8.5.0+890+6b136101.x86_64.rpm 1.4 MB/s | 70 kB 00:00 (181/199): policycoreutils-python-utils-2.9-16.el8.noarch.rpm 3.3 MB/s | 252 kB 00:00 (182/199): libnfnetlink-1.0.1-13.el8.x86_64.rpm 218 kB/s | 33 kB 00:00 (183/199): libnftnl-1.1.5-4.el8.x86_64.rpm 1.0 MB/s | 83 kB 00:00 (184/199): popt-1.18-1.el8.x86_64.rpm 1.0 MB/s | 61 kB 00:00 (185/199): libnghttp2-1.33.0-3.el8_2.1.x86_64.rpm 1.1 MB/s | 77 kB 00:00 (186/199): libnl3-3.5.0-1.el8.x86_64.rpm 4.6 MB/s | 320 kB 00:00 (187/199): procps-ng-3.3.15-6.el8.x86_64.rpm 4.1 MB/s | 329 kB 00:00 (188/199): libnsl2-1.2.0-2.20180605git4a062cf.el8.x86_64.rpm 358 kB/s | 58 kB 00:00 (189/199): 
publicsuffix-list-dafsa-20180723-1.el8.noarch.rpm 600 kB/s | 56 kB 00:00 (190/199): python3-audit-3.0-0.17.20191104git1c2f876.el8.x86_64.rpm 1.7 MB/s | 86 kB 00:00 (191/199): libpcap-1.9.1-5.el8.x86_64.rpm 2.3 MB/s | 169 kB 00:00 (192/199): libpsl-0.20.2-6.el8.x86_64.rpm 1.0 MB/s | 61 kB 00:00 (193/199): libpwquality-1.4.4-3.el8.x86_64.rpm 1.3 MB/s | 107 kB 00:00 (194/199): python3-libs-3.6.8-41.el8.x86_64.rpm 31 MB/s | 7.8 MB 00:00 (195/199): python3-libselinux-2.9-5.el8.x86_64.rpm 3.7 MB/s | 283 kB 00:00 (196/199): python3-libsemanage-2.9-6.el8.x86_64.rpm 1.7 MB/s | 127 kB 00:00 (197/199): libseccomp-2.5.1-1.el8.x86_64.rpm 1.1 MB/s | 71 kB 00:00 (198/199): docker-ce-cli-20.10.18-3.el8.x86_64.rpm 133 kB/s | 30 MB 03:46 (199/199): containerd.io-1.6.8-3.1.el8.x86_64.rpm 133 kB/s | 33 MB 04:14

The packages plus their dependencies come to 199 in all, including dependencies of dependencies; fetching them one by one by hand......

2 Removing the installed Docker

List the Docker packages currently installed:

[root@ ~]# yum list installed |grep docker
containerd.io.x86_64                 1.6.7-3.1.el8       @docker-ce-stable
docker-ce.x86_64                     3:20.10.17-3.el8    @docker-ce-stable
docker-ce-cli.x86_64                 1:20.10.17-3.el8    @docker-ce-stable
docker-ce-rootless-extras.x86_64     20.10.17-3.el8      @docker-ce-stable
docker-scan-plugin.x86_64            0.17.0-3.el8        @docker-ce-stable

Remove docker-ce first:

[root@ ~]# yum -y remove docker-ce.x86_64
Dependencies resolved.
======================================================================================================================== Package Architecture Version Repository Size ======================================================================================================================== Removing: docker-ce x86_64 3:20.10.17-3.el8 @docker-ce-stable 95 M Removing unused dependencies: containerd.io x86_64 1.6.7-3.1.el8 @docker-ce-stable 125 M docker-ce-rootless-extras x86_64 20.10.17-3.el8 @docker-ce-stable 16 M fuse-common x86_64 3.2.1-12.el8 @base 4.7 k fuse-overlayfs x86_64 1.7.1-1.module_el8.5.0+890+6b136101 @AppStream 145 k fuse3 x86_64 3.2.1-12.el8 @base 90 k fuse3-libs x86_64 3.2.1-12.el8 @base 279 k libcgroup x86_64 0.41-19.el8 @base 136 k libslirp x86_64 4.4.0-1.module_el8.5.0+890+6b136101 @AppStream 134 k slirp4netns x86_64 1.1.8-1.module_el8.5.0+890+6b136101 @AppStream 98 k Transaction Summary ======================================================================================================================== Remove 10 Packages Freed space: 237 M Running transaction check Transaction check succeeded. Running transaction test Transaction test succeeded. 
Running transaction Preparing : 1/1 Running scriptlet: docker-ce-3:20.10.17-3.el8.x86_64 1/1 Running scriptlet: docker-ce-3:20.10.17-3.el8.x86_64 1/10 Erasing : docker-ce-3:20.10.17-3.el8.x86_64 1/10 Running scriptlet: docker-ce-3:20.10.17-3.el8.x86_64 1/10 Running scriptlet: docker-ce-rootless-extras-20.10.17-3.el8.x86_64 2/10 Erasing : docker-ce-rootless-extras-20.10.17-3.el8.x86_64 2/10 Running scriptlet: docker-ce-rootless-extras-20.10.17-3.el8.x86_64 2/10 Running scriptlet: containerd.io-1.6.7-3.1.el8.x86_64 3/10 Erasing : containerd.io-1.6.7-3.1.el8.x86_64 3/10 Running scriptlet: containerd.io-1.6.7-3.1.el8.x86_64 3/10 Erasing : fuse-overlayfs-1.7.1-1.module_el8.5.0+890+6b136101.x86_64 4/10 Erasing : slirp4netns-1.1.8-1.module_el8.5.0+890+6b136101.x86_64 5/10 Erasing : fuse3-3.2.1-12.el8.x86_64 6/10 Erasing : fuse-common-3.2.1-12.el8.x86_64 7/10 Erasing : libslirp-4.4.0-1.module_el8.5.0+890+6b136101.x86_64 8/10 Erasing : fuse3-libs-3.2.1-12.el8.x86_64 9/10 Running scriptlet: fuse3-libs-3.2.1-12.el8.x86_64 9/10 Erasing : libcgroup-0.41-19.el8.x86_64 10/10 Running scriptlet: libcgroup-0.41-19.el8.x86_64 10/10 Verifying : containerd.io-1.6.7-3.1.el8.x86_64 1/10 Verifying : docker-ce-3:20.10.17-3.el8.x86_64 2/10 Verifying : docker-ce-rootless-extras-20.10.17-3.el8.x86_64 3/10 Verifying : fuse-common-3.2.1-12.el8.x86_64 4/10 Verifying : fuse-overlayfs-1.7.1-1.module_el8.5.0+890+6b136101.x86_64 5/10 Verifying : fuse3-3.2.1-12.el8.x86_64 6/10 Verifying : fuse3-libs-3.2.1-12.el8.x86_64 7/10 Verifying : libcgroup-0.41-19.el8.x86_64 8/10 Verifying : libslirp-4.4.0-1.module_el8.5.0+890+6b136101.x86_64 9/10 Verifying : slirp4netns-1.1.8-1.module_el8.5.0+890+6b136101.x86_64 10/10 Removed: containerd.io-1.6.7-3.1.el8.x86_64 docker-ce-3:20.10.17-3.el8.x86_64 docker-ce-rootless-extras-20.10.17-3.el8.x86_64 fuse-common-3.2.1-12.el8.x86_64 fuse-overlayfs-1.7.1-1.module_el8.5.0+890+6b136101.x86_64 fuse3-3.2.1-12.el8.x86_64 fuse3-libs-3.2.1-12.el8.x86_64 
libcgroup-0.41-19.el8.x86_64 libslirp-4.4.0-1.module_el8.5.0+890+6b136101.x86_64 slirp4netns-1.1.8-1.module_el8.5.0+890+6b136101.x86_64

Then remove the docker-ce-cli package:

[root@ ~]# yum -y remove docker-ce-cli.x86_64
Dependencies resolved.
========================================================================================================================
 Package                        Architecture       Version                      Repository                         Size
========================================================================================================================
Removing:
 docker-ce-cli                  x86_64             1:20.10.17-3.el8             @docker-ce-stable                 140 M
Removing dependent packages:
 docker-scan-plugin             x86_64             0.17.0-3.el8                 @docker-ce-stable                  13 M

Transaction Summary
========================================================================================================================
Remove  2 Packages

Freed space: 153 M
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                         1/1
  Erasing          : docker-ce-cli-1:20.10.17-3.el8.x86_64   1/2
  Running scriptlet: docker-scan-plugin-0.17.0-3.el8.x86_64  2/2
  Erasing          : docker-scan-plugin-0.17.0-3.el8.x86_64  2/2
  Running scriptlet: docker-scan-plugin-0.17.0-3.el8.x86_64  2/2
  Verifying        : docker-ce-cli-1:20.10.17-3.el8.x86_64   1/2
  Verifying        : docker-scan-plugin-0.17.0-3.el8.x86_64  2/2

Removed:
  docker-ce-cli-1:20.10.17-3.el8.x86_64   docker-scan-plugin-0.17.0-3.el8.x86_64

Complete!

Check that no docker-related packages remain on the system:

[root@ ~]# yum list installed |grep docker
[root@ ~]#

3 Installing Docker offline

Once the packages and their dependencies have been downloaded, installation is straightforward: change to the directory holding the packages and install them locally with yum:

[root@ docker_package]# yum localinstall *.rpm
Last metadata expiration check: 0:32:03 ago on Thu 20 Oct 2022 09:32:14 AM CST.
Package acl-2.2.53-1.el8.x86_64 is already installed.
Package audit-libs-3.0-0.17.20191104git1c2f876.el8.x86_64 is already installed.
Package basesystem-11-5.el8.noarch is already installed.
Package bash-4.4.20-2.el8.x86_64 is already installed.
Package brotli-1.0.6-3.el8.x86_64 is already installed.
Package bzip2-libs-1.0.6-26.el8.x86_64 is already installed. Package ca-certificates-2021.2.50-80.0.el8_4.noarch is already installed. Package centos-gpg-keys-1:8-3.el8.noarch is already installed. Package centos-linux-release-8.5-1.2111.el8.noarch is already installed. Package centos-linux-repos-8-3.el8.noarch is already installed. Package checkpolicy-2.9-1.el8.x86_64 is already installed. Package chkconfig-1.19.1-1.el8.x86_64 is already installed. Package container-selinux-2:2.167.0-1.module_el8.5.0+911+f19012f9.noarch is already installed. Package coreutils-8.30-12.el8.x86_64 is already installed. Package coreutils-common-8.30-12.el8.x86_64 is already installed. Package cpio-2.12-10.el8.x86_64 is already installed. Package cracklib-2.9.6-15.el8.x86_64 is already installed. Package cracklib-dicts-2.9.6-15.el8.x86_64 is already installed. Package crypto-policies-20210617-1.gitc776d3e.el8.noarch is already installed. Package crypto-policies-scripts-20210617-1.gitc776d3e.el8.noarch is already installed. Package cryptsetup-libs-2.3.3-4.el8.x86_64 is already installed. Package curl-7.61.1-22.el8.x86_64 is already installed. Package cyrus-sasl-lib-2.1.27-5.el8.x86_64 is already installed. Package dbus-1:1.12.8-14.el8.x86_64 is already installed. Package dbus-common-1:1.12.8-14.el8.noarch is already installed. Package dbus-daemon-1:1.12.8-14.el8.x86_64 is already installed. Package dbus-libs-1:1.12.8-14.el8.x86_64 is already installed. Package dbus-tools-1:1.12.8-14.el8.x86_64 is already installed. Package device-mapper-8:1.02.177-10.el8.x86_64 is already installed. Package device-mapper-libs-8:1.02.177-10.el8.x86_64 is already installed. Package diffutils-3.6-6.el8.x86_64 is already installed. Package dracut-049-191.git20210920.el8.x86_64 is already installed. Package elfutils-debuginfod-client-0.185-1.el8.x86_64 is already installed. Package elfutils-default-yama-scope-0.185-1.el8.noarch is already installed. 
Package elfutils-libelf-0.185-1.el8.x86_64 is already installed. Package elfutils-libs-0.185-1.el8.x86_64 is already installed. Package expat-2.2.5-4.el8.x86_64 is already installed. Package file-5.33-20.el8.x86_64 is already installed. Package file-libs-5.33-20.el8.x86_64 is already installed. Package filesystem-3.8-6.el8.x86_64 is already installed. Package findutils-1:4.6.0-20.el8.x86_64 is already installed. Package gawk-4.2.1-2.el8.x86_64 is already installed. Package gdbm-1:1.18-1.el8.x86_64 is already installed. Package gdbm-libs-1:1.18-1.el8.x86_64 is already installed. Package gettext-0.19.8.1-17.el8.x86_64 is already installed. Package gettext-libs-0.19.8.1-17.el8.x86_64 is already installed. Package glib2-2.56.4-156.el8.x86_64 is already installed. Package glibc-2.28-164.el8.x86_64 is already installed. Package glibc-common-2.28-164.el8.x86_64 is already installed. Package gmp-1:6.1.2-10.el8.x86_64 is already installed. Package gnutls-3.6.16-4.el8.x86_64 is already installed. Package grep-3.1-6.el8.x86_64 is already installed. Package grub2-common-1:2.02-106.el8.noarch is already installed. Package grub2-tools-1:2.02-106.el8.x86_64 is already installed. Package grub2-tools-minimal-1:2.02-106.el8.x86_64 is already installed. Package grubby-8.40-42.el8.x86_64 is already installed. Package gzip-1.9-12.el8.x86_64 is already installed. Package hardlink-1:1.3-6.el8.x86_64 is already installed. Package hwdata-0.314-8.10.el8.noarch is already installed. Package info-6.5-6.el8.x86_64 is already installed. Package iptables-1.8.4-20.el8.x86_64 is already installed. Package iptables-libs-1.8.4-20.el8.x86_64 is already installed. Package json-c-0.13.1-2.el8.x86_64 is already installed. Package kbd-2.0.4-10.el8.x86_64 is already installed. Package kbd-legacy-2.0.4-10.el8.noarch is already installed. Package kbd-misc-2.0.4-10.el8.noarch is already installed. Package keyutils-libs-1.5.10-9.el8.x86_64 is already installed. 
Package kmod-25-18.el8.x86_64 is already installed. Package kmod-libs-25-18.el8.x86_64 is already installed. Package kpartx-0.8.4-17.el8.x86_64 is already installed. Package krb5-libs-1.18.2-14.el8.x86_64 is already installed. Package libacl-2.2.53-1.el8.x86_64 is already installed. Package libarchive-3.3.3-1.el8.x86_64 is already installed. Package libattr-2.4.48-3.el8.x86_64 is already installed. Package libblkid-2.32.1-28.el8.x86_64 is already installed. Package libcap-2.26-5.el8.x86_64 is already installed. Package libcap-ng-0.7.11-1.el8.x86_64 is already installed. Package libcom_err-1.45.6-2.el8.x86_64 is already installed. Package libcroco-0.6.12-4.el8_2.1.x86_64 is already installed. Package libcurl-7.61.1-22.el8.x86_64 is already installed. Package libdb-5.3.28-42.el8_4.x86_64 is already installed. Package libdb-utils-5.3.28-42.el8_4.x86_64 is already installed. Package libfdisk-2.32.1-28.el8.x86_64 is already installed. Package libffi-3.1-22.el8.x86_64 is already installed. Package libgcc-8.5.0-4.el8_5.x86_64 is already installed. Package libgcrypt-1.8.5-6.el8.x86_64 is already installed. Package libgomp-8.5.0-4.el8_5.x86_64 is already installed. Package libgpg-error-1.31-1.el8.x86_64 is already installed. Package libibverbs-35.0-1.el8.x86_64 is already installed. Package libidn2-2.2.0-1.el8.x86_64 is already installed. Package libkcapi-1.2.0-2.el8.x86_64 is already installed. Package libkcapi-hmaccalc-1.2.0-2.el8.x86_64 is already installed. Package libmnl-1.0.4-6.el8.x86_64 is already installed. Package libmount-2.32.1-28.el8.x86_64 is already installed. Package libnetfilter_conntrack-1.0.6-5.el8.x86_64 is already installed. Package libnfnetlink-1.0.1-13.el8.x86_64 is already installed. Package libnftnl-1.1.5-4.el8.x86_64 is already installed. Package libnghttp2-1.33.0-3.el8_2.1.x86_64 is already installed. Package libnl3-3.5.0-1.el8.x86_64 is already installed. Package libnsl2-1.2.0-2.20180605git4a062cf.el8.x86_64 is already installed. 
Package libpcap-14:1.9.1-5.el8.x86_64 is already installed. Package libpsl-0.20.2-6.el8.x86_64 is already installed. Package libpwquality-1.4.4-3.el8.x86_64 is already installed. Package libseccomp-2.5.1-1.el8.x86_64 is already installed. Package libselinux-2.9-5.el8.x86_64 is already installed. Package libselinux-utils-2.9-5.el8.x86_64 is already installed. Package libsemanage-2.9-6.el8.x86_64 is already installed. Package libsepol-2.9-3.el8.x86_64 is already installed. Package libsigsegv-2.11-5.el8.x86_64 is already installed. Package libsmartcols-2.32.1-28.el8.x86_64 is already installed. Package libssh-0.9.4-3.el8.x86_64 is already installed. Package libssh-config-0.9.4-3.el8.noarch is already installed. Package libstdc++-8.5.0-4.el8_5.x86_64 is already installed. Package libtasn1-4.13-3.el8.x86_64 is already installed. Package libtirpc-1.1.4-5.el8.x86_64 is already installed. Package libunistring-0.9.9-3.el8.x86_64 is already installed. Package libutempter-1.1.6-14.el8.x86_64 is already installed. Package libuuid-2.32.1-28.el8.x86_64 is already installed. Package libverto-0.3.0-5.el8.x86_64 is already installed. Package libxcrypt-4.1.1-6.el8.x86_64 is already installed. Package libxkbcommon-0.9.1-1.el8.x86_64 is already installed. Package libxml2-2.9.7-9.el8_4.2.x86_64 is already installed. Package libzstd-1.4.4-1.el8.x86_64 is already installed. Package lua-libs-5.3.4-12.el8.x86_64 is already installed. Package lz4-libs-1.8.3-3.el8_4.x86_64 is already installed. Package memstrack-0.1.11-1.el8.x86_64 is already installed. Package mpfr-3.1.6-1.el8.x86_64 is already installed. Package ncurses-6.1-9.20180224.el8.x86_64 is already installed. Package ncurses-base-6.1-9.20180224.el8.noarch is already installed. Package ncurses-libs-6.1-9.20180224.el8.x86_64 is already installed. Package nettle-3.4.1-7.el8.x86_64 is already installed. Package openldap-2.4.46-18.el8.x86_64 is already installed. Package openssl-1:1.1.1k-5.el8_5.x86_64 is already installed. 
Package openssl-libs-1:1.1.1k-5.el8_5.x86_64 is already installed. Package openssl-pkcs11-0.4.10-2.el8.x86_64 is already installed. Package os-prober-1.74-9.el8.x86_64 is already installed. Package p11-kit-0.23.22-1.el8.x86_64 is already installed. Package p11-kit-trust-0.23.22-1.el8.x86_64 is already installed. Package pam-1.3.1-15.el8.x86_64 is already installed. Package pciutils-3.7.0-1.el8.x86_64 is already installed. Package pciutils-libs-3.7.0-1.el8.x86_64 is already installed. Package pcre2-10.32-2.el8.x86_64 is already installed. Package pcre-8.42-6.el8.x86_64 is already installed. Package pigz-2.4-4.el8.x86_64 is already installed. Package platform-python-3.6.8-41.el8.x86_64 is already installed. Package platform-python-pip-9.0.3-20.el8.noarch is already installed. Package platform-python-setuptools-39.2.0-6.el8.noarch is already installed. Package policycoreutils-2.9-16.el8.x86_64 is already installed. Package policycoreutils-python-utils-2.9-16.el8.noarch is already installed. Package popt-1.18-1.el8.x86_64 is already installed. Package procps-ng-3.3.15-6.el8.x86_64 is already installed. Package publicsuffix-list-dafsa-20180723-1.el8.noarch is already installed. Package python3-audit-3.0-0.17.20191104git1c2f876.el8.x86_64 is already installed. Package python3-libs-3.6.8-41.el8.x86_64 is already installed. Package python3-libselinux-2.9-5.el8.x86_64 is already installed. Package python3-libsemanage-2.9-6.el8.x86_64 is already installed. Package python3-pip-wheel-9.0.3-20.el8.noarch is already installed. Package python3-policycoreutils-2.9-16.el8.noarch is already installed. Package python3-setools-4.3.0-2.el8.x86_64 is already installed. Package python3-setuptools-wheel-39.2.0-6.el8.noarch is already installed. Package rdma-core-35.0-1.el8.x86_64 is already installed. Package readline-7.0-10.el8.x86_64 is already installed. Package rpm-4.14.3-19.el8.x86_64 is already installed. Package rpm-libs-4.14.3-19.el8.x86_64 is already installed. 
Package rpm-plugin-selinux-4.14.3-19.el8.x86_64 is already installed. Package sed-4.5-2.el8.x86_64 is already installed. Package selinux-policy-3.14.3-80.el8_5.2.noarch is already installed. Package selinux-policy-targeted-3.14.3-80.el8_5.2.noarch is already installed. Package setup-2.12.2-6.el8.noarch is already installed. Package shadow-utils-2:4.6-14.el8.x86_64 is already installed. Package shared-mime-info-1.9-3.el8.x86_64 is already installed. Package sqlite-libs-3.26.0-15.el8.x86_64 is already installed. Package systemd-239-51.el8_5.2.x86_64 is already installed. Package systemd-libs-239-51.el8_5.2.x86_64 is already installed. Package systemd-pam-239-51.el8_5.2.x86_64 is already installed. Package systemd-udev-239-51.el8_5.2.x86_64 is already installed. Package tar-2:1.30-5.el8.x86_64 is already installed. Package trousers-0.3.15-1.el8.x86_64 is already installed. Package trousers-lib-0.3.15-1.el8.x86_64 is already installed. Package tzdata-2021e-1.el8.noarch is already installed. Package util-linux-2.32.1-28.el8.x86_64 is already installed. Package which-2.21-16.el8.x86_64 is already installed. Package xkeyboard-config-2.28-1.el8.noarch is already installed. Package xz-5.2.4-3.el8.x86_64 is already installed. Package xz-libs-5.2.4-3.el8.x86_64 is already installed. Package zlib-1.2.11-17.el8.x86_64 is already installed. Dependencies resolved. 
======================================================================================================================== Package Architecture Version Repository Size ======================================================================================================================== Installing: containerd.io x86_64 1.6.8-3.1.el8 @commandline 33 M docker-ce x86_64 3:20.10.18-3.el8 @commandline 21 M docker-ce-cli x86_64 1:20.10.18-3.el8 @commandline 30 M docker-ce-rootless-extras x86_64 20.10.18-3.el8 @commandline 4.6 M docker-scan-plugin x86_64 0.17.0-3.el8 @commandline 3.8 M fuse-common x86_64 3.2.1-12.el8 @commandline 21 k fuse-overlayfs x86_64 1.7.1-1.module_el8.5.0+890+6b136101 @commandline 73 k fuse3 x86_64 3.2.1-12.el8 @commandline 50 k fuse3-libs x86_64 3.2.1-12.el8 @commandline 94 k glibc-all-langpacks x86_64 2.28-164.el8 @commandline 25 M libcgroup x86_64 0.41-19.el8 @commandline 70 k libslirp x86_64 4.4.0-1.module_el8.5.0+890+6b136101 @commandline 70 k slirp4netns x86_64 1.1.8-1.module_el8.5.0+890+6b136101 @commandline 51 k Transaction Summary ======================================================================================================================== Install 13 Packages Total size: 118 M Installed size: 780 M Is this ok [y/N]: y Downloading Packages: Running transaction check Transaction check succeeded. Running transaction test Transaction test succeeded. 
Running transaction Preparing : 1/1 Installing : docker-scan-plugin-0.17.0-3.el8.x86_64 1/13 Running scriptlet: docker-scan-plugin-0.17.0-3.el8.x86_64 1/13 Installing : docker-ce-cli-1:20.10.18-3.el8.x86_64 2/13 Running scriptlet: docker-ce-cli-1:20.10.18-3.el8.x86_64 2/13 Installing : libslirp-4.4.0-1.module_el8.5.0+890+6b136101.x86_64 3/13 Installing : slirp4netns-1.1.8-1.module_el8.5.0+890+6b136101.x86_64 4/13 Running scriptlet: libcgroup-0.41-19.el8.x86_64 5/13 Installing : libcgroup-0.41-19.el8.x86_64 5/13 Running scriptlet: libcgroup-0.41-19.el8.x86_64 5/13 Installing : fuse-common-3.2.1-12.el8.x86_64 6/13 Installing : fuse3-3.2.1-12.el8.x86_64 7/13 Installing : fuse3-libs-3.2.1-12.el8.x86_64 8/13 Running scriptlet: fuse3-libs-3.2.1-12.el8.x86_64 8/13 Installing : fuse-overlayfs-1.7.1-1.module_el8.5.0+890+6b136101.x86_64 9/13 Running scriptlet: fuse-overlayfs-1.7.1-1.module_el8.5.0+890+6b136101.x86_64 9/13 Installing : containerd.io-1.6.8-3.1.el8.x86_64 10/13 Running scriptlet: containerd.io-1.6.8-3.1.el8.x86_64 10/13 Installing : docker-ce-rootless-extras-20.10.18-3.el8.x86_64 11/13 Running scriptlet: docker-ce-rootless-extras-20.10.18-3.el8.x86_64 11/13 Installing : docker-ce-3:20.10.18-3.el8.x86_64 12/13 Running scriptlet: docker-ce-3:20.10.18-3.el8.x86_64 12/13 Installing : glibc-all-langpacks-2.28-164.el8.x86_64 13/13 Running scriptlet: glibc-all-langpacks-2.28-164.el8.x86_64 13/13 Verifying : containerd.io-1.6.8-3.1.el8.x86_64 1/13 Verifying : docker-ce-3:20.10.18-3.el8.x86_64 2/13 Verifying : docker-ce-cli-1:20.10.18-3.el8.x86_64 3/13 Verifying : docker-ce-rootless-extras-20.10.18-3.el8.x86_64 4/13 Verifying : docker-scan-plugin-0.17.0-3.el8.x86_64 5/13 Verifying : fuse3-3.2.1-12.el8.x86_64 6/13 Verifying : fuse3-libs-3.2.1-12.el8.x86_64 7/13 Verifying : fuse-common-3.2.1-12.el8.x86_64 8/13 Verifying : fuse-overlayfs-1.7.1-1.module_el8.5.0+890+6b136101.x86_64 9/13 Verifying : glibc-all-langpacks-2.28-164.el8.x86_64 10/13 Verifying : 
libcgroup-0.41-19.el8.x86_64                                 11/13
  Verifying        : libslirp-4.4.0-1.module_el8.5.0+890+6b136101.x86_64     12/13
  Verifying        : slirp4netns-1.1.8-1.module_el8.5.0+890+6b136101.x86_64  13/13

Installed:
  containerd.io-1.6.8-3.1.el8.x86_64
  docker-ce-3:20.10.18-3.el8.x86_64
  docker-ce-cli-1:20.10.18-3.el8.x86_64
  docker-ce-rootless-extras-20.10.18-3.el8.x86_64
  docker-scan-plugin-0.17.0-3.el8.x86_64
  fuse-common-3.2.1-12.el8.x86_64
  fuse-overlayfs-1.7.1-1.module_el8.5.0+890+6b136101.x86_64
  fuse3-3.2.1-12.el8.x86_64
  fuse3-libs-3.2.1-12.el8.x86_64
  glibc-all-langpacks-2.28-164.el8.x86_64
  libcgroup-0.41-19.el8.x86_64
  libslirp-4.4.0-1.module_el8.5.0+890+6b136101.x86_64
  slirp4netns-1.1.8-1.module_el8.5.0+890+6b136101.x86_64

Complete!

After the installation, Docker has not been started:

[root@ docker_package]# systectl status docker
-bash: systectl: command not found
[root@ docker_package]# systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled; vendor preset: disabled)
   Active: inactive (dead)
     Docs: https://docs.docker.com

Start it and check the status again:

[root@ docker_package]# systemctl start docker
[root@ docker_package]# systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled; vendor preset: disabled)
   Active: active (running) since Thu 2022-10-20 10:06:11 CST; 58s ago
     Docs: https://docs.docker.com
 Main PID: 164837 (dockerd)
    Tasks: 7
   Memory: 30.0M
   CGroup: /system.slice/docker.service
           └─164837 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock

Oct 20 10:06:10 iZ2ze0t8khaprrpfvmevjiZ dockerd[164837]: time="2022-10-20T10:06:10.519097458+08:00" level=info msg="[gr>
Oct 20 10:06:10 iZ2ze0t8khaprrpfvmevjiZ dockerd[164837]: time="2022-10-20T10:06:10.543296975+08:00" level=warning msg=">
Oct 20 10:06:10 iZ2ze0t8khaprrpfvmevjiZ dockerd[164837]: time="2022-10-20T10:06:10.543326630+08:00" level=warning msg=">
Oct 20 10:06:10 iZ2ze0t8khaprrpfvmevjiZ dockerd[164837]: time="2022-10-20T10:06:10.543536947+08:00" level=info msg="Loa>
Oct 20 10:06:11 iZ2ze0t8khaprrpfvmevjiZ dockerd[164837]: time="2022-10-20T10:06:11.016974140+08:00" level=info msg="Def>
Oct 20 10:06:11 iZ2ze0t8khaprrpfvmevjiZ dockerd[164837]: time="2022-10-20T10:06:11.162497151+08:00" level=info msg="Loa>
Oct 20 10:06:11 iZ2ze0t8khaprrpfvmevjiZ dockerd[164837]: time="2022-10-20T10:06:11.202560686+08:00" level=info msg="Doc>
Oct 20 10:06:11 iZ2ze0t8khaprrpfvmevjiZ dockerd[164837]: time="2022-10-20T10:06:11.202651469+08:00" level=info msg="Dae>
Oct 20 10:06:11 iZ2ze0t8khaprrpfvmevjiZ systemd[1]: Started Docker Application Container Engine.
Oct 20 10:06:11 iZ2ze0t8khaprrpfvmevjiZ dockerd[164837]: time="2022-10-20T10:06:11.236514596+08:00" level=info msg="API>

Check the Docker version:

[root@ docker_package]# docker version
Client: Docker Engine - Community
 Version:           20.10.18
 API version:       1.41
 Go version:        go1.18.6
 Git commit:        b40c2f6
 Built:             Thu Sep  8 23:11:56 2022
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.18
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.18.6
  Git commit:       e42327a
  Built:            Thu Sep  8 23:10:04 2022
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.6.8
  GitCommit:        9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6
 runc:
  Version:          1.1.4
  GitCommit:        v1.1.4-0-g5fd4c4d
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0
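The offline cycle above ends with checking which docker-related packages are present, a check worth scripting when repeated on many hosts. Below is a minimal sketch; it feeds mock `yum list installed`-style output so it is self-contained, whereas on a real host you would pipe the actual command instead.

```shell
# Report docker-related packages found in "yum list installed"-style input;
# print a fixed message when none are present.
check_docker_pkgs() {
    grep -E 'docker|containerd' || echo "no docker packages installed"
}

# Mock input standing in for `yum list installed` on a cleaned host
printf '%s\n' \
  'bash.x86_64   4.4.20-2.el8    @base' \
  'zlib.x86_64   1.2.11-17.el8   @base' | check_docker_pkgs
# Real usage: yum list installed | check_docker_pkgs
```

With the mock input this prints "no docker packages installed"; after a successful offline install it would instead echo the docker-ce, docker-ce-cli and containerd.io lines.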

Managing jar Files with the jar Command

1 Viewing the contents of a jar

[root@ ~]# jar -tvf fastjson.jar |head -10
     0 Wed Oct 05 13:34:30 CST 2022 META-INF/
    81 Wed Oct 05 13:34:30 CST 2022 META-INF/MANIFEST.MF
     0 Wed Oct 05 13:34:10 CST 2022 META-INF/scm/
     0 Wed Oct 05 13:34:10 CST 2022 META-INF/scm/com.alibaba/
     0 Wed Oct 05 13:34:10 CST 2022 META-INF/scm/com.alibaba/fastjson/
     0 Wed Oct 05 13:34:10 CST 2022 com/
     0 Wed Oct 05 13:34:10 CST 2022 com/alibaba/
     0 Wed Oct 05 13:34:10 CST 2022 com/alibaba/fastjson/
     0 Wed Oct 05 13:34:10 CST 2022 com/alibaba/fastjson/serializer/
     0 Wed Oct 05 13:34:10 CST 2022 com/alibaba/fastjson/util/

2 Extracting a file from a jar

Extract pom.xml from the fastjson.jar package:

[root@ ~]# jar -xvf fastjson.jar META-INF/maven/com.alibaba/fastjson/pom.xml
 inflated: META-INF/maven/com.alibaba/fastjson/pom.xml

The jar command creates, under the current directory, all the parent directories of the file being extracted, and the extracted file keeps the same path it has inside the jar. If the file sits in the root of the jar, it is written straight into the current directory; beware that a same-named file in the current directory will be overwritten, so back it up or rename it first.

3 Updating a file in a jar

When updating a file in a jar, pay attention to the file's path: the path given on the command line must match the file's path inside the jar. Run the command below from the parent directory of META-INF. The pom file extracted earlier has already been edited to add one comment line:

[root@ ~]# jar -uvf fastjson.jar META-INF/maven/com.alibaba/fastjson/pom.xml
adding: META-INF/maven/com.alibaba/fastjson/pom.xml(in = 11669) (out= 1855)(deflated 84%)

Check the files in the jar:

[root@ ~]# jar -tvf fastjson.jar |grep pom
 11669 Wed Oct 19 10:23:06 CST 2022 META-INF/maven/com.alibaba/fastjson/pom.xml
    55 Wed Oct 05 13:34:30 CST 2022 META-INF/maven/com.alibaba/fastjson/pom.properties

The timestamp shows that pom.xml is the file just updated. Extract it from the jar again and look at its content:

[root@ ~]# jar -xvf fastjson.jar META-INF/maven/com.alibaba/fastjson/pom.xml
 inflated: META-INF/maven/com.alibaba/fastjson/pom.xml
[root@ ~]# head -10 META-INF/maven/com.alibaba/fastjson/pom.xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- edit by test user ->
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <groupId>com.alibaba.fastjson2</groupId>
        <artifactId>fastjson2-parent</artifactId>
        <version>2.0.15</version>

4 Adding a file to a jar

Adding a file to a jar also uses the -u option: when the file does not yet exist in the jar, it is added.

[root@ ~]# jar -uvf fastjson.jar META-INF/help.info
adding: META-INF/help.info(in = 23) (out= 23)(deflated 0%)
[root@ ~]# jar -tvf fastjson.jar |grep help
    23 Wed Oct 19 10:33:26 CST 2022 META-INF/help.info
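A practical footnote to the overwrite warning in section 2: before extracting into a directory that already holds a same-named file, take a backup first. The helper below is a hypothetical convenience script, not part of the jar tool, and pom.xml is only an example target.

```shell
# Back up a path before `jar -x` would silently overwrite it.
backup_before_extract() {
    target="$1"
    if [ -e "$target" ]; then
        cp -p "$target" "$target.bak"
        echo "backed up $target to $target.bak"
    fi
}

cd "$(mktemp -d)"                  # scratch directory for the demo
echo "old content" > pom.xml
backup_before_extract pom.xml      # then run: jar -xvf some.jar pom.xml
cat pom.xml.bak
```

The guard is a no-op when the target does not exist yet, so it is safe to call unconditionally before every extraction.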

Oracle RAC: Multipath Configuration on CentOS

1 Checking that the multipath software is installed

The operating system version:

[root@my_ob ~]# cat /etc/redhat-release
CentOS Linux release 7.9.2009 (Core)

Check whether device-mapper is installed:

[root@ ~]# rpm -qa |grep device
device-mapper-event-libs-1.02.177-10.el8.x86_64
device-mapper-event-1.02.177-10.el8.x86_64
device-mapper-multipath-0.8.4-17.el8.x86_64
device-mapper-libs-1.02.177-10.el8.x86_64
device-mapper-1.02.177-10.el8.x86_64
device-mapper-multipath-libs-0.8.4-17.el8.x86_64
device-mapper-persistent-data-0.9.0-4.el8.x86_64

If it is not installed, install it with:

yum -y install device-mapper*

2 Obtaining the disks' WWIDs

2.1 List the block devices on the current system

[root@my_ob sbin]# lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0   60G  0 disk
├─sda1            8:1    0    1G  0 part /boot
└─sda2            8:2    0   59G  0 part
  ├─centos-root 253:0    0 35.6G  0 lvm  /
  ├─centos-swap 253:1    0    6G  0 lvm  [SWAP]
  └─centos-home 253:2    0 17.4G  0 lvm  /home
sr0              11:0    1 1024M  0 rom

2.2 Get a block device's scsi_id

[root@my_ob udev]# pwd
/lib/udev
[root@my_ob udev]# ./scsi_id -g -u /dev/sda   ## sda is the block device name

3 Editing the multipath configuration file

vi /etc/multipath.conf

# 1 This section disables multipathing for the local disk
blacklist {
    #devnode "*"
    devnode "hda"
    wwid 3600508e000000000dc7200032e08af0b
}
# 2 This section assigns fixed names to the disks
multipaths {
    multipath {
        wwid "360002ac0000000000000000200045dbf"
        alias data
    }
    multipath {
        wwid "360002ac0000000000000000300078dbf"
        alias ocr1
    }
    multipath {
        wwid "360002ac0000000000000000400054dbf"
        alias ocr2
    }
}

4 Reload the multipath service

multipath -r

5 Rescan the paths

multipath -v2

6 View the paths

multipath -ll

7 Adding disks online

Scan for new disks; run the scan for every HBA:

echo "- - -" > /sys/class/scsi_host/host0/scan

Find the newly discovered disks under /dev, obtain their WWIDs with scsi_id, edit the multipath configuration file to set aliases for them, reload the multipath software, and rescan and coalesce the paths; then check the newly discovered paths under /dev/mapper/ and /dev/mpath/. The file that maps aliases to WWIDs is /etc/multipath/bindings.
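When there are many LUNs, hand-writing each multipath { } stanza invites typos, so the stanzas can be generated from a wwid/alias list. Below is a sketch using the WWIDs from the example configuration above; it is pure text generation and runs no multipath commands.

```shell
# Print one multipath { } stanza for a wwid/alias pair.
mp_stanza() {
    printf '    multipath {\n        wwid "%s"\n        alias %s\n    }\n' "$1" "$2"
}

echo "multipaths {"
mp_stanza 360002ac0000000000000000200045dbf data
mp_stanza 360002ac0000000000000000300078dbf ocr1
mp_stanza 360002ac0000000000000000400054dbf ocr2
echo "}"
```

Redirect the output into a scratch file, review it, and merge it into /etc/multipath.conf before reloading with multipath -r.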

Oracle: Automated Maintenance Tasks

Using Oracle 11g as an example, this article shows how to find out which maintenance tasks Oracle predefines, when they run, and what they execute. Because the information about these tasks is spread across several views, several views have to be queried; to keep the queries from becoming too complex, the work is broken into steps.

1 Query the predefined automated tasks

The DBA_AUTOTASK_OPERATION view lists the automated tasks defined by the database and the operation each performs.

SQL> select CLIENT_NAME,OPERATION_NAME from DBA_AUTOTASK_OPERATION;

CLIENT_NAME                                      OPERATION_NAME
------------------------------------------------ ----------------------------------------------------------------
auto optimizer stats collection                  auto optimizer stats job
auto space advisor                               auto space advisor job
sql tuning advisor                               automatic sql tuning task

2 Query the program each task executes

The dba_scheduler_programs view shows the script or stored procedure behind each program.

SQL> select OWNER,PROGRAM_NAME,PROGRAM_ACTION from dba_scheduler_programs;

OWNE PROGRAM_NAME                     PROGRAM_ACTION
---- -------------------------------- ----------------------------------------------------------------
SYS  JDM_XFORM_SEQ_PROGRAM            SYS.dbms_jdm_internal.xform_seq_task
SYS  JDM_XFORM_PROGRAM                SYS.dbms_jdm_internal.xform_task
SYS  JDM_TEST_PROGRAM                 SYS.dbms_jdm_internal.test_task
SYS  JDM_SQL_APPLY_PROGRAM            SYS.dbms_jdm_internal.sql_apply_task
SYS  JDM_PROFILE_PROGRAM              SYS.dbms_jdm_internal.profile_task
SYS  JDM_PREDICT_PROGRAM              SYS.dbms_jdm_internal.PREDICT_TASK
SYS  JDM_IMPORT_PROGRAM               SYS.dbms_jdm_internal.import_task
SYS  JDM_EXPORT_PROGRAM               SYS.dbms_jdm_internal.export_task
SYS  JDM_EXPLAIN_PROGRAM              SYS.dbms_jdm_internal.explain_task
SYS  JDM_BUILD_PROGRAM                SYS.dbms_jdm_internal.build_task
SYS  HS_PARALLEL_SAMPLING             sys.dbms_hs_parallel_metadata.table_sampling
SYS  AQ$_PROPAGATION_PROGRAM          SYS.DBMS_AQADM_SYS.aq$_propagation_procedure
SYS  PURGE_LOG_PROG                   dbms_scheduler.auto_purge
SYS  BSLN_MAINTAIN_STATS_PROG         begin if prvt_advisor.is_pack_enabled('DIAGNOSTIC') then dbsnmp.bsln_internal.maintain_statistics; end if; end;
SYS  AUTO_SQL_TUNING_PROG             DECLARE ename VARCHAR2(30); BEGIN ename := dbms_sqltune.execute_tuning_task( 'SYS_AUTO_SQL_TUNING_TASK');
SYS  AUTO_SPACE_ADVISOR_PROG          dbms_space.auto_space_advisor_job_proc
SYS  FILE_WATCHER_PROGRAM             dbms_isched.file_watch_job
SYS  GATHER_STATS_PROG                dbms_stats.gather_database_stats_job_proc
SYS  ORA$AGE_AUTOTASK_DATA            dbms_autotask_prvt.age

19 rows selected.

3 Query the windows each task runs in

SQL> select a.CLIENT_NAME,a.STATUS,a.WINDOW_GROUP,b.WINDOW_NAME
  2  from dba_autotask_client a left join
  3  DBA_SCHEDULER_WINGROUP_MEMBERS b on a.WINDOW_GROUP=b.WINDOW_GROUP_NAME order by 1;

CLIENT_NAME                                      STATUS   WINDOW_GROUP         WINDOW_NAME
------------------------------------------------ -------- -------------------- ------------------------------
auto optimizer stats collection                  ENABLED  ORA$AT_WGRP_OS       SATURDAY_WINDOW
auto optimizer stats collection                  ENABLED  ORA$AT_WGRP_OS       MONDAY_WINDOW
auto optimizer stats collection                  ENABLED  ORA$AT_WGRP_OS       WEDNESDAY_WINDOW
auto optimizer stats collection                  ENABLED  ORA$AT_WGRP_OS       TUESDAY_WINDOW
auto optimizer stats collection                  ENABLED  ORA$AT_WGRP_OS       SUNDAY_WINDOW
auto optimizer stats collection                  ENABLED  ORA$AT_WGRP_OS       FRIDAY_WINDOW
auto optimizer stats collection                  ENABLED  ORA$AT_WGRP_OS       THURSDAY_WINDOW
auto space advisor                               ENABLED  ORA$AT_WGRP_SA       SATURDAY_WINDOW
auto space advisor                               ENABLED  ORA$AT_WGRP_SA       FRIDAY_WINDOW
auto space advisor                               ENABLED  ORA$AT_WGRP_SA       WEDNESDAY_WINDOW
auto space advisor                               ENABLED  ORA$AT_WGRP_SA       MONDAY_WINDOW
auto space advisor                               ENABLED  ORA$AT_WGRP_SA       SUNDAY_WINDOW
auto space advisor                               ENABLED  ORA$AT_WGRP_SA       TUESDAY_WINDOW
auto space advisor                               ENABLED  ORA$AT_WGRP_SA       THURSDAY_WINDOW
sql tuning advisor                               ENABLED  ORA$AT_WGRP_SQ       FRIDAY_WINDOW
sql tuning advisor                               ENABLED  ORA$AT_WGRP_SQ       SUNDAY_WINDOW
sql tuning advisor                               ENABLED  ORA$AT_WGRP_SQ       WEDNESDAY_WINDOW
sql tuning advisor                               ENABLED  ORA$AT_WGRP_SQ       TUESDAY_WINDOW
sql tuning advisor                               ENABLED  ORA$AT_WGRP_SQ       SATURDAY_WINDOW
sql tuning advisor                               ENABLED  ORA$AT_WGRP_SQ       MONDAY_WINDOW
sql tuning advisor                               ENABLED  ORA$AT_WGRP_SQ       THURSDAY_WINDOW

21 rows selected.

All three tasks run in every one of the seven maintenance windows, Monday through Sunday.

4 Query each window's start time and duration

SQL> select WINDOW_NAME,REPEAT_INTERVAL,DURATION from DBA_SCHEDULER_WINDOWS;

WINDOW_NAME                    REPEAT_INTERVAL                                                        DURATION
------------------------------ ---------------------------------------------------------------------- ---------------
MONDAY_WINDOW                  freq=daily;byday=MON;byhour=22;byminute=0; bysecond=0                  +000 04:00:00
TUESDAY_WINDOW                 freq=daily;byday=TUE;byhour=22;byminute=0; bysecond=0                  +000 04:00:00
WEDNESDAY_WINDOW               freq=daily;byday=WED;byhour=22;byminute=0; bysecond=0                  +000 04:00:00
THURSDAY_WINDOW                freq=daily;byday=THU;byhour=22;byminute=0; bysecond=0                  +000 04:00:00
FRIDAY_WINDOW                  freq=daily;byday=FRI;byhour=22;byminute=0; bysecond=0                  +000 04:00:00
SATURDAY_WINDOW                freq=daily;byday=SAT;byhour=6;byminute=0; bysecond=0                   +000 20:00:00
SUNDAY_WINDOW                  freq=daily;byday=SUN;byhour=6;byminute=0; bysecond=0                   +000 20:00:00
WEEKNIGHT_WINDOW               freq=daily;byday=MON,TUE,WED,THU,FRI;byhour=22;byminute=0; bysecond=0  +000 08:00:00
WEEKEND_WINDOW                 freq=daily;byday=SAT;byhour=0;byminute=0;bysecond=0                    +002 00:00:00

9 rows selected.

In the DURATION column, the first three digits are days and the rest is hours:minutes:seconds, so each weekday window lasts 4 hours and the weekend windows last 20 hours.
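The REPEAT_INTERVAL values above use the DBMS_SCHEDULER calendaring syntax: semicolon-separated key=value clauses. When reporting on many windows it can be handy to split these strings programmatically; below is a minimal Python sketch of one way to do that (the parse_repeat_interval helper is illustrative, not an Oracle API):

```python
def parse_repeat_interval(interval: str) -> dict:
    """Split a DBMS_SCHEDULER calendaring string such as
    'freq=daily;byday=MON;byhour=22;byminute=0;bysecond=0'
    into a {clause: value} dict."""
    clauses = {}
    for part in interval.split(";"):
        part = part.strip()
        if not part:
            continue
        key, _, value = part.partition("=")
        clauses[key.strip().lower()] = value.strip()
    return clauses

spec = parse_repeat_interval("freq=daily;byday=MON;byhour=22;byminute=0; bysecond=0")
print(spec["byday"], spec["byhour"])  # MON 22
```

With each window parsed into a dict, the byday/byhour clauses can be compared directly, e.g. to confirm that the weekday windows all open at 22:00.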

MySQL -- Using the General Query Log for Database Auditing

1 What is the MySQL general query log?

The MySQL general query log records user connects and disconnects and every SQL statement the server receives from clients. Each entry includes the user name and IP address of the connection and the time the statement was executed, so in some scenarios the log can serve as a database audit trail.

Its main limitation is that it has no filtering or selection options: it records everything, which can put pressure on storage and I/O and also has some impact on database performance. A practical approach is to enable it briefly when you suspect problem statements are running, identify the statement, user, and IP address involved, and then disable it again.

2 How is the general log enabled?

The general log is controlled by the general_log variable. It is a dynamic variable, so the log can be switched on while the database is running; it can also be enabled on the command line at startup:

mysqld_safe --user=mysql --general-log=on --datadir=/mysqldata &

Once enabled, the variables look like this:

mysql> show variables like '%general%';
+------------------+----------------------------------------+
| Variable_name    | Value                                  |
+------------------+----------------------------------------+
| general_log      | ON                                     |
| general_log_file | /mysqldata/iZ2ze0t8khaprrpfvmevjiZ.log |
+------------------+----------------------------------------+
2 rows in set (0.00 sec)

As the general_log_file value shows, the general log (also called the general query log) is written to a file in the operating system by default. It can instead be stored in a table inside the MySQL server by setting:

mysql> set global log_output='TABLE';
Query OK, 0 rows affected (0.00 sec)

Note that this setting redirects the slow query log to a table as well. The general log is then stored in the general_log table of the mysql database:

mysql> show tables like '%general%';
+-----------------------------+
| Tables_in_mysql (%general%) |
+-----------------------------+
| general_log                 |
+-----------------------------+
1 row in set (0.01 sec)

3 How is the recorded log viewed?

The structure of the general_log table in the mysql database:

mysql> desc general_log;
+--------------+---------------------+------+-----+----------------------+--------------------------------+
| Field        | Type                | Null | Key | Default              | Extra                          |
+--------------+---------------------+------+-----+----------------------+--------------------------------+
| event_time   | timestamp(6)        | NO   |     | CURRENT_TIMESTAMP(6) | on update CURRENT_TIMESTAMP(6) |
| user_host    | mediumtext          | NO   |     | NULL                 |                                |
| thread_id    | bigint(21) unsigned | NO   |     | NULL                 |                                |
| server_id    | int(10) unsigned    | NO   |     | NULL                 |                                |
| command_type | varchar(64)         | NO   |     | NULL                 |                                |
| argument     | mediumblob          | NO   |     | NULL                 |                                |
+--------------+---------------------+------+-----+----------------------+--------------------------------+
6 rows in set (0.00 sec)

Querying this table returns the log entries:

mysql> select * from general_log limit 10;
| event_time                 | user_host                    | thread_id | server_id | command_type | argument |
| 2022-09-14 14:47:00.066478 | root[root] @ localhost []    | 3 | 0 | Query   | select * from general_log |
| 2022-09-14 14:49:00.988472 | root[root] @ localhost []    | 3 | 0 | Query   | GRANT ALL PRIVILEGES ON *.* TO 'test'@'%' IDENTIFIED WITH 'mysql_native_password' AS '<secret>' |
| 2022-09-14 14:49:18.585143 | root[root] @ localhost []    | 3 | 0 | Query   | flush privileges |
| 2022-09-14 14:53:15.055589 | [test] @ [100.104.224.3]     | 4 | 0 | Connect | test@100.104.224.3 on information_schema using TCP/IP |
| 2022-09-14 14:53:15.057256 | test[test] @ [100.104.224.3] | 4 | 0 | Query   | SET autocommit=true |
| 2022-09-14 14:53:15.060085 | test[test] @ [100.104.224.3] | 4 | 0 | Query   | /* rds internal mark */ /* hdm internal mark */ SELECT /*+ MAX_EXECUTION_TIME(2000) */ 1 |
| 2022-09-14 14:53:15.061626 | test[test] @ [100.104.224.3] | 4 | 0 | Quit    |  |
| 2022-09-14 14:53:19.873779 | [test] @ [100.104.224.3]     | 5 | 0 | Connect | test@100.104.224.3 on information_schema using TCP/IP |
| 2022-09-14 14:53:19.875503 | test[test] @ [100.104.224.3] | 5 | 0 | Query   | SET autocommit=true |
| 2022-09-14 14:55:12.879961 | test[test] @ [100.104.224.3] | 5 | 0 | Query   | /* rds internal mark */ /* hdm internal mark */ SELECT /*+ MAX_EXECUTION_TIME(2000) */ 1 |
10 rows in set (0.00 sec)

4 How is the general log purged?

Because the general log consumes a large amount of storage, it needs to be purged periodically to release space. The simplest method is to empty the table with the truncate command.

Space used by the table before truncating:

mysql> system ls -l /mysqldata/mysql/general*
-rw-r----- 1 mysql mysql    35 Sep 14 14:38 /mysqldata/mysql/general_log.CSM
-rw-r----- 1 mysql mysql 10264 Sep 14 14:44 /mysqldata/mysql/general_log.CSV
-rw-r----- 1 mysql mysql  8776 Aug  9 14:07 /mysqldata/mysql/general_log.frm

Run the truncate command:

mysql> truncate table general_log;
Query OK, 0 rows affected (0.00 sec)

Space used after truncating:

mysql> system ls -l /mysqldata/mysql/general*
-rw-r----- 1 mysql mysql   35 Sep 14 14:47 /mysqldata/mysql/general_log.CSM
-rw-r----- 1 mysql mysql   97 Sep 14 14:47 /mysqldata/mysql/general_log.CSV
-rw-r----- 1 mysql mysql 8776 Aug  9 14:07 /mysqldata/mysql/general_log.frm

general_log.CSV is the table's data file; as the listing shows, the space has been released.
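Because the general_log table uses the CSV storage engine, its data file can also be post-processed outside the server for simple auditing. A minimal Python sketch that counts statements per user_host from CSV rows shaped like the table above (the sample rows here are made up for illustration):

```python
import csv
from collections import Counter
from io import StringIO

# Made-up rows in the general_log column order:
# event_time, user_host, thread_id, server_id, command_type, argument
sample = StringIO(
    '"2022-09-14 14:47:00","root[root] @ localhost []","3","0","Query","select * from general_log"\n'
    '"2022-09-14 14:53:15","test[test] @ [100.104.224.3]","4","0","Query","SET autocommit=true"\n'
    '"2022-09-14 14:55:12","test[test] @ [100.104.224.3]","5","0","Query","SELECT 1"\n'
)

by_user = Counter()
for event_time, user_host, *_rest in csv.reader(sample):
    by_user[user_host] += 1            # tally entries per connecting user/host

for user, n in by_user.most_common():
    print(user, n)
```

The same grouping can of course be done in SQL against mysql.general_log; the file-based approach is mainly useful when the log has been copied off the server.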

Oracle -- Silent Installation of Oracle 11.2.0.4

1 Download the installation packages

The Oracle 11.2.0.4 distribution consists of seven zip files, which can be unzipped individually. Installing the database requires only the first two; the third is the Grid Infrastructure package, which is installed first when building RAC.

The first two packages are:

[oracle@my_ob tmp]$ ls -l p133*
-rw-r--r-- 1 root root 1395582860 Sep  6 14:57 p13390677_112040_Linux-x86-64_1of7.zip
-rw-r--r-- 1 root root 1151304589 Sep  7 09:57 p13390677_112040_Linux-x86-64_2of7.zip

Unzip both files with the unzip command; by default they both extract into the database directory under the current directory. Note that both packages must be unzipped: if only the first is extracted, the software installation and listener creation will still succeed, but database creation will fail with a "database template not found" error.

2 Create the Oracle groups and user

Run the following as root:

groupadd -g 500 oinstall
groupadd -g 501 dba
useradd -u 500 -g oinstall -G dba oracle

Then set a password for the new user:

#passwd oracle

3 Create the directory and set its owner and permissions

mkdir -p /u01/app/oracle

Only the Oracle base directory needs to be created here; the Oracle home and inventory directories are created automatically during installation. Change the owner and permissions:

chown -R oracle:oinstall /u01/
chmod -R 755 /u01/

4 Edit the user's environment variables

Edit the .bash_profile file in the oracle user's home directory with vi and append the following:

export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1
export ORACLE_SID=Oracle11g
export PATH=$PATH:$ORACLE_HOME/bin
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib

5 Edit the oracle user's resource limits

echo "oracle hard nproc 16384" >> /etc/security/limits.conf
echo "oracle soft nofile 10240" >> /etc/security/limits.conf
echo "oracle hard nofile 65536" >> /etc/security/limits.conf
echo "oracle soft stack 10240" >> /etc/security/limits.conf
echo "oracle hard stack 32768" >> /etc/security/limits.conf
echo "* soft memlock 104857600" >> /etc/security/limits.conf
echo "* hard memlock 104857600" >> /etc/security/limits.conf

6 Edit the kernel parameters

echo "fs.aio-max-nr = 1048576" >> /etc/sysctl.conf  #configure linux kernel aio
echo "fs.file-max = 6815744" >> /etc/sysctl.conf
echo "kernel.shmall = 2097152" >> /etc/sysctl.conf  #total share memory segment in pages
echo "kernel.shmmax = 536870912" >> /etc/sysctl.conf  #size in bytes recommend more than half of total memory
echo "kernel.shmmni = 4096" >> /etc/sysctl.conf
echo "kernel.sem = 250 32000 100 128" >> /etc/sysctl.conf
echo "net.ipv4.ip_local_port_range = 1024 65000" >> /etc/sysctl.conf
echo "net.core.rmem_default=262144" >> /etc/sysctl.conf
echo "net.core.rmem_max=262144" >> /etc/sysctl.conf
echo "net.core.wmem_default=262144" >> /etc/sysctl.conf
echo "net.core.wmem_max=262144" >> /etc/sysctl.conf

After editing, run sysctl -p to make the parameters take effect.

7 Edit /etc/pam.d/login

This file needs editing on 64-bit operating systems:

echo "session required /lib/security/pam_limits.so" >> /etc/pam.d/login
echo "session required pam_limits.so" >> /etc/pam.d/login

8 Install the dependency packages

yum install gcc gcc-c++ make binutils compat-libstdc++-33 elfutils-libelf elfutils-libelf-devel glibc glibc-common glibc-devel libaio libaio-devel libgcc libstdc++ libstdc++-devel unixODBC unixODBC-devel

Afterwards, verify that all required packages are installed:

rpm -q gcc gcc-c++ make binutils compat-libstdc++-33 elfutils-libelf elfutils-libelf-devel glibc glibc-common glibc-devel libaio libaio-devel libgcc libstdc++ libstdc++-devel unixODBC unixODBC-devel | grep 'not installed'

The output of this command should be empty.

9 Install the database software

9.1 Prepare the response file

The response directory of the unzipped package contains three response file templates:

[root@my_ob database]# ls -l response
total 80
-rwxrwxrwx 1 root   root     44533 Aug 27  2013 dbca.rsp
-rwxr-xr-x 1 oracle oinstall 25116 Sep  6 16:35 db_install.rsp
-rwxrwxrwx 1 root   root      5871 Aug 27  2013 netca.rsp

db_install.rsp is the template for installing the database software; edit it as needed and use it for the silent installation. The response file used for this installation:

oracle.install.responseFileVersion=/oracle/install/rspfmt_dbinstall_response_schema_v11_2_0
oracle.install.option=INSTALL_DB_SWONLY
ORACLE_HOSTNAME=my_ob
UNIX_GROUP_NAME=oinstall
INVENTORY_LOCATION=/u01/app/oraInventory
SELECTED_LANGUAGES=en,zh_CN
ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1
ORACLE_BASE=/u01/app/oracle
oracle.install.db.InstallEdition=EE
oracle.install.db.EEOptionsSelection=false
oracle.install.db.DBA_GROUP=dba
oracle.install.db.OPER_GROUP=dba
DECLINE_SECURITY_UPDATES=true
oracle.installer.autoupdates.option=SKIP_UPDATES

This response file installs the software only and does not create a database (oracle.install.option=INSTALL_DB_SWONLY), and it skips security updates and automatic updates.

9.2 Run runInstaller to install

Unless noted otherwise, the commands below run as the oracle user.

[oracle@my_ob database]$ ./runInstaller -silent -noconfig -ignorePrereq -responseFile /tmp/database/db_install.rsp
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 120 MB.   Actual 13838 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 6143 MB    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2022-09-06_04-52-51PM. Please wait ...
You can find the log of this install session at:
 /u01/app/oraInventory/logs/installActions2022-09-06_04-52-51PM.log
The installation of Oracle Database 11g was successful.
Please check '/u01/app/oraInventory/logs/silentInstall2022-09-06_04-52-51PM.log' for more details.

As a root user, execute the following script(s):
        1. /u01/app/oraInventory/orainstRoot.sh
        2. /u01/app/oracle/product/11.2.0/db_1/root.sh

Successfully Setup Software.

9.3 Run the root scripts

As prompted, run the two scripts as root:

[root@my_ob tmp]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
[root@my_ob tmp]# /u01/app/oracle/product/11.2.0/db_1/root.sh
Check /u01/app/oracle/product/11.2.0/db_1/install/root_my_ob_2022-09-06_16-55-45.log for the output of root script

10 Create the listener

The netca response file template can be used unmodified; it creates a listener named LISTENER.

[oracle@my_ob database]$ netca /silent -responsefile /tmp/database/netca.rsp

Parsing command line arguments:
    Parameter "silent" = true
    Parameter "responsefile" = /tmp/database/netca.rsp
Done parsing command line arguments.
Oracle Net Services Configuration:
Profile configuration complete.
Oracle Net Listener Startup:
    Running Listener Control:
      /u01/app/oracle/product/11.2.0/db_1/bin/lsnrctl start LISTENER
    Listener Control complete.
    Listener started successfully.
Listener configuration complete.
Oracle Net Services configuration successful. The exit code is 0

11 Create the database

11.1 Prepare the response file

Copy and edit dbca.rsp. The template is thoroughly commented; most options are optional and have defaults. The response file used for this installation:

[GENERAL]
RESPONSEFILE_VERSION = "11.2.0"
OPERATION_TYPE = "createDatabase"
[CREATEDATABASE]
GDBNAME = "orcl11g.us.oracle.com"
SID = "orcl11g"
TEMPLATENAME = "General_Purpose.dbc"
SYSPASSWORD = sys
SYSTEMPASSWORD = system
CHARACTERSET = ZHS16GBK
NATIONALCHARACTERSET= AL16UTF16

11.2 Create the database

[oracle@my_ob ~]$ dbca -silent -responseFile /tmp/database/dbca.rsp
Copying database files
1% complete  3% complete  11% complete  18% complete  26% complete  37% complete
Creating and starting Oracle instance
40% complete  45% complete  50% complete  55% complete  56% complete  60% complete  62% complete
Completing Database Creation
66% complete  70% complete  73% complete  85% complete  96% complete  100% complete
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/orcl11g/orcl11g.log" for further details.

11.3 Log in and check the data file locations

SQL> select file_name, tablespace_name from dba_data_files;

FILE_NAME                                        TABLESPACE_NAME
------------------------------------------------ ------------------
/u01/app/oracle/oradata/orcl11g/users01.dbf      USERS
/u01/app/oracle/oradata/orcl11g/undotbs01.dbf    UNDOTBS1
/u01/app/oracle/oradata/orcl11g/sysaux01.dbf     SYSAUX
/u01/app/oracle/oradata/orcl11g/system01.dbf     SYSTEM

By default the data files are created under the $ORACLE_BASE/oradata directory.
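Oracle response files are plain key=value text with '#' comment lines, so it is easy to sanity-check one before launching a silent install. A minimal Python sketch (the parse_response helper is illustrative, not an Oracle tool) that parses such a file and confirms the keys this article relies on are present:

```python
def parse_response(text: str) -> dict:
    """Parse key=value lines from an Oracle-style response file,
    ignoring blank lines and '#' comments."""
    opts = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        opts[key.strip()] = value.strip()
    return opts

rsp = """
# software-only install
oracle.install.option=INSTALL_DB_SWONLY
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1
"""
opts = parse_response(rsp)
missing = [k for k in ("oracle.install.option", "ORACLE_HOME") if k not in opts]
print(opts["oracle.install.option"], "missing:", missing)  # INSTALL_DB_SWONLY missing: []
```

A check like this catches a mistyped key before runInstaller fails halfway through; it does not validate the values themselves.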

MySQL -- Generating and Importing Test Data for TPC-DS, a Basic Decision-Support Benchmark

1 TPC-DS versus TPC-H

When database benchmarks come up, the first that comes to mind is usually TPC-C, which is widely used to measure online transaction processing (OLTP) performance; sysbench, the test tool commonly used on Linux, supports OLTP tests, and the open-source sysbench-tpcc implements a TPC-C workload on top of sysbench. So TPC-C is the OLTP benchmark; what about OLAP (online analytical processing)? The best known is TPC-H: modeled on retail, it defines 8 tables and 22 queries, follows the SQL-92 standard, and its schema is fairly close to an OLTP schema.

A less well known benchmark, and the subject of this article, is TPC-DS. Its schema is a typical data-warehouse design using star, snowflake, and other multidimensional models; it contains 7 fact tables and 17 dimension tables and closely resembles real big-data analytics and mining applications. For someone starting out with data warehousing, learning from a concrete example is easier, and this data set is a good choice.

2 Download and build TPC-DS

The download link is here:
https://www.tpc.org/tpc_documents_current_versions/current_specifications5.asp

After filling in the form on that page and clicking download, an error may appear. The error is not caused by bad input; a web search shows it can be worked around by installing the gooreplacer extension in Firefox, available at:
https://addons.mozilla.org/zh-CN/firefox/addon/gooreplacer/

Install and configure the extension, restart the browser, open the TPC-DS download link again, fill in the form, and click download. A download link is then sent to the email address you entered; paste that link into the browser to download the package.

Building the tool is very simple: run make in the /usr/local/tpcds/tools directory.

[root@ tools]# pwd
/usr/local/tpcds/tools
[root@ tools]# make

3 Prepare the MySQL database

Preparing MySQL is simple as well: create a database and create the tables in it.

3.1 Create and switch to the database

mysql> create database tpcds DEFAULT CHARSET utf8 COLLATE utf8_general_ci;
Query OK, 1 row affected (0.00 sec)

mysql> use tpcds;
Database changed

3.2 Create the tables

The table-creation script is long, so it is not reproduced here. After running it there are 25 tables:

mysql> show tables;
+------------------------+
| Tables_in_tpcds        |
+------------------------+
| call_center            |
| catalog_page           |
| catalog_returns        |
| catalog_sales          |
| customer               |
| customer_address       |
| customer_demographics  |
| date_dim               |
| dbgen_version          |
| household_demographics |
| income_band            |
| inventory              |
| item                   |
| promotion              |
| reason                 |
| ship_mode              |
| store                  |
| store_returns          |
| store_sales            |
| time_dim               |
| warehouse              |
| web_page               |
| web_returns            |
| web_sales              |
| web_site               |
+------------------------+
25 rows in set (0.00 sec)

4 Generate the test data

Create the directory that will hold the test data first:

[root@ tools]# mkdir -p /tmp/tpcds_data

Then run the data generator:

[root@ tools]# ./dsdgen -DIR /tmp/tpcds_data -SCALE 1 -TERMINATE N
dsdgen Population Generator (Version 3.2.0)
Warning: This scale factor is valid for QUALIFICATION ONLY

The -SCALE parameter sets the data size, in gigabytes.

5 Generate the import script

The test data produced by TPC-DS can be imported into MySQL with LOAD DATA statements. The statements can be written by hand in a text editor or generated with a shell script. The loop below only needs the data directory adjusted to produce the import script for all 25 tables:

[root@ tpcds_data]# for file in `ls -l |awk '{print $9}'`;
> do
> echo "LOAD DATA INFILE '/tmp/tpcds_data/"$file"' INTO TABLE" ${file%%.*} " FIELDS TERMINATED BY '|' LINES TERMINATED BY '\n';";
> done

The script is a for loop: `ls -l | awk '{print $9}'` prints the names of the files in the current directory (extensions included), and for each name the loop splices together a LOAD statement using shell string operations. ${file%%.*} keeps the part of the file name to the left of the first '.'; note that these are curly braces, not parentheses. The output:

LOAD DATA INFILE '/tmp/tpcds_data/call_center.dat' INTO TABLE call_center FIELDS TERMINATED BY '|' LINES TERMINATED BY '\n';
LOAD DATA INFILE '/tmp/tpcds_data/catalog_page.dat' INTO TABLE catalog_page FIELDS TERMINATED BY '|' LINES TERMINATED BY '\n';
LOAD DATA INFILE '/tmp/tpcds_data/catalog_returns.dat' INTO TABLE catalog_returns FIELDS TERMINATED BY '|' LINES TERMINATED BY '\n';
LOAD DATA INFILE '/tmp/tpcds_data/catalog_sales.dat' INTO TABLE catalog_sales FIELDS TERMINATED BY '|' LINES TERMINATED BY '\n';
LOAD DATA INFILE '/tmp/tpcds_data/customer_address.dat' INTO TABLE customer_address FIELDS TERMINATED BY '|' LINES TERMINATED BY '\n';
LOAD DATA INFILE '/tmp/tpcds_data/customer.dat' INTO TABLE customer FIELDS TERMINATED BY '|' LINES TERMINATED BY '\n';
LOAD DATA INFILE '/tmp/tpcds_data/customer_demographics.dat' INTO TABLE customer_demographics FIELDS TERMINATED BY '|' LINES TERMINATED BY '\n';
LOAD DATA INFILE '/tmp/tpcds_data/date_dim.dat' INTO TABLE date_dim FIELDS TERMINATED BY '|' LINES TERMINATED BY '\n';
LOAD DATA INFILE '/tmp/tpcds_data/dbgen_version.dat' INTO TABLE dbgen_version FIELDS TERMINATED BY '|' LINES TERMINATED BY '\n';
LOAD DATA INFILE '/tmp/tpcds_data/household_demographics.dat' INTO TABLE household_demographics FIELDS TERMINATED BY '|' LINES TERMINATED BY '\n';
LOAD DATA INFILE '/tmp/tpcds_data/income_band.dat' INTO TABLE income_band FIELDS TERMINATED BY '|' LINES TERMINATED BY '\n';
LOAD DATA INFILE '/tmp/tpcds_data/inventory.dat' INTO TABLE inventory FIELDS TERMINATED BY '|' LINES TERMINATED BY '\n';
LOAD DATA INFILE '/tmp/tpcds_data/item.dat' INTO TABLE item FIELDS TERMINATED BY '|' LINES TERMINATED BY '\n';
LOAD DATA INFILE '/tmp/tpcds_data/promotion.dat' INTO TABLE promotion FIELDS TERMINATED BY '|' LINES TERMINATED BY '\n';
LOAD DATA INFILE '/tmp/tpcds_data/reason.dat' INTO TABLE reason FIELDS TERMINATED BY '|' LINES TERMINATED BY '\n';
LOAD DATA INFILE '/tmp/tpcds_data/ship_mode.dat' INTO TABLE ship_mode FIELDS TERMINATED BY '|' LINES TERMINATED BY '\n';
LOAD DATA INFILE '/tmp/tpcds_data/store.dat' INTO TABLE store FIELDS TERMINATED BY '|' LINES TERMINATED BY '\n';
LOAD DATA INFILE '/tmp/tpcds_data/store_returns.dat' INTO TABLE store_returns FIELDS TERMINATED BY '|' LINES TERMINATED BY '\n';
LOAD DATA INFILE '/tmp/tpcds_data/store_sales.dat' INTO TABLE store_sales FIELDS TERMINATED BY '|' LINES TERMINATED BY '\n';
LOAD DATA INFILE '/tmp/tpcds_data/time_dim.dat' INTO TABLE time_dim FIELDS TERMINATED BY '|' LINES TERMINATED BY '\n';
LOAD DATA INFILE '/tmp/tpcds_data/warehouse.dat' INTO TABLE warehouse FIELDS TERMINATED BY '|' LINES TERMINATED BY '\n';
LOAD DATA INFILE '/tmp/tpcds_data/web_page.dat' INTO TABLE web_page FIELDS TERMINATED BY '|' LINES TERMINATED BY '\n';
LOAD DATA INFILE '/tmp/tpcds_data/web_returns.dat' INTO TABLE web_returns FIELDS TERMINATED BY '|' LINES TERMINATED BY '\n';
LOAD DATA INFILE '/tmp/tpcds_data/web_sales.dat' INTO TABLE web_sales FIELDS TERMINATED BY '|' LINES TERMINATED BY '\n';
LOAD DATA INFILE '/tmp/tpcds_data/web_site.dat' INTO TABLE web_site FIELDS TERMINATED BY '|' LINES TERMINATED BY '\n';

6 Import the data

Log in to MySQL, switch to the tpcds database, and paste the generated script. The import often fails with this error:

mysql> LOAD DATA INFILE '/tmp/tpcds_data/call_center.dat' INTO TABLE call_center FIELDS TERMINATED BY '|' LINES TERMINATED BY '\n';
ERROR 1290 (HY000): The MySQL server is running with the --secure-file-priv option so it cannot execute this statement

The cause is the secure_file_priv setting, which is NULL by default:

mysql> show variables like '%secure_file_priv%';
+------------------+-------+
| Variable_name    | Value |
+------------------+-------+
| secure_file_priv | NULL  |
+------------------+-------+
1 row in set (0.00 sec)

It needs to point at the directory holding the files to import, but it cannot be changed at runtime:

mysql> set secure_file_priv="/tmp/tpcds_data/";
ERROR 1238 (HY000): Variable 'secure_file_priv' is a read only variable

Because the variable is read-only, it must be set before the database starts. Shut MySQL down and restart it with the option:

[root@ tpcds_data]# mysqld_safe --user=mysql --datadir=/mysqldata --secure_file_priv="/tmp/tpcds_data/"&
[1] 60061
[root@ tpcds_data]# 2022-08-30T07:10:08.899244Z mysqld_safe Logging to '/mysqldata/iZ2ze0t8khaprrpfvmevjiZ.err'.
2022-08-30T07:10:08.926603Z mysqld_safe Starting mysqld daemon with databases from /mysqldata

Log in again and retry the import; it now reaches a second error:

mysql> LOAD DATA INFILE '/tmp/tpcds_data/call_center.dat' INTO TABLE call_center FIELDS TERMINATED BY '|' LINES TERMINATED BY '\n';
ERROR 1292 (22007): Incorrect date value: '' for column 'cc_rec_end_date' at row 1

This error can be eliminated by relaxing the SQL mode:

mysql> set sql_mode ='NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION';
Query OK, 0 rows affected (0.00 sec)

Importing again now succeeds:

mysql> LOAD DATA INFILE '/tmp/tpcds_data/call_center.dat' INTO TABLE call_center FIELDS TERMINATED BY '|' LINES TERMINATED BY '\n';
Query OK, 6 rows affected, 9 warnings (0.00 sec)
Records: 6  Deleted: 0  Skipped: 0  Warnings: 9

7 Generate the queries

# ./dsqgen -input ../query_templates/templates.lst -directory ../query_templates -output ./sql.ansi/ -DIALECT ansi -LOG ./sql.ansi/ansi.log

-directory is the directory holding the query templates, and -DIALECT selects the SQL dialect. Query generation may fail with this error:

qgen2 Query Generator (Version 3.2.0)
Warning: This scale factor is valid for QUALIFICATION ONLY
ERROR: Substitution'_END' is used before being initialized at line 63 in ../query_templates/query1.tpl

The fix is to append one line to the end of each template file:

[root@ query_templates]# for i in `ls query*tpl`
> do
> echo $i;
> echo "define _END = \"\";" >> $i
> done

The output directory also has to exist before generating the queries:

[root@ tools]# mkdir sql.ansi

Then generate them:

[root@ tools]# ./dsqgen -input ../query_templates/templates.lst -directory ../query_templates -output ./sql.ansi/ -DIALECT ansi -LOG ./sql.ansi/ansi.log
qgen2 Query Generator (Version 3.2.0)
Warning: This scale factor is valid for QUALIFICATION ONLY

8 Run a query

Pick one of the generated queries and run it:

select count(*)
from store_sales, household_demographics, time_dim, store
where ss_sold_time_sk = time_dim.t_time_sk
  and ss_hdemo_sk = household_demographics.hd_demo_sk
  and ss_store_sk = s_store_sk
  and time_dim.t_hour = 8
  and time_dim.t_minute >= 30
  and household_demographics.hd_dep_count = 5
  and store.s_store_name = 'ese'
order by count(*);

Then look at its execution plan:

mysql> explain select count(*) from store_sales, household_demographics, time_dim, store where ss_sold_time_sk = time_dim.t_time_sk and ss_hdemo_sk = household_demographics.hd_demo_sk and ss_store_sk = s_store_sk and time_dim.t_hour = 8 and time_dim.t_minute >= 30 and household_demographics.hd_dep_count = 5 and store.s_store_name = 'ese' order by count(*);
| id | select_type | table                  | partitions | type   | possible_keys | key     | key_len | ref                               | rows    | filtered | Extra                                              |
| 1  | SIMPLE      | store                  | NULL       | ALL    | PRIMARY       | NULL    | NULL    | NULL                              | 12      | 10.00    | Using where                                        |
| 1  | SIMPLE      | store_sales            | NULL       | ALL    | NULL          | NULL    | NULL    | NULL                              | 3754283 | 10.00    | Using where; Using join buffer (Block Nested Loop) |
| 1  | SIMPLE      | household_demographics | NULL       | eq_ref | PRIMARY       | PRIMARY | 4       | tpcds.store_sales.ss_hdemo_sk     | 1       | 10.00    | Using where                                        |
| 1  | SIMPLE      | time_dim               | NULL       | eq_ref | PRIMARY       | PRIMARY | 4       | tpcds.store_sales.ss_sold_time_sk | 1       | 5.00     | Using where                                        |
4 rows in set, 1 warning (0.00 sec)
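The shell loop in section 5 can also be written in Python; this sketch produces the same LOAD DATA statements purely by string construction from a list of .dat file names (no database connection involved):

```python
from pathlib import PurePosixPath

DATA_DIR = "/tmp/tpcds_data"

def load_statement(filename: str) -> str:
    """Build a LOAD DATA statement for one TPC-DS .dat file.
    The table name is the file name up to the first '.', like ${file%%.*}."""
    table = filename.split(".", 1)[0]
    path = str(PurePosixPath(DATA_DIR) / filename)
    return (f"LOAD DATA INFILE '{path}' INTO TABLE {table} "
            "FIELDS TERMINATED BY '|' LINES TERMINATED BY '\\n';")

for name in ("call_center.dat", "store_sales.dat"):
    print(load_statement(name))
```

In practice the file list would come from os.listdir() on the data directory; it is hard-coded here so the sketch runs anywhere.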

Oracle WebLogic -- Installing WebLogic 11g on CentOS Linux 8.5.2

1 Install Java

Download the Java 1.6 JDK and upload it to the server, then run it:

./jdk-6u45-linux-x64.bin

This command simply extracts the JDK package into the current directory. Move the extracted directory to /usr/local, then edit the .bash_profile in root's home directory and add the Java-related settings. The edited file:

[root@ ~]# cat .bash_profile
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi

# User specific environment and startup programs
export PS1='[\u@ \W]\$ '
export JAVA_HOME=/usr/local/jdk1.6.0_45
PATH=$PATH:$HOME/bin
#export CLASSPATH=$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH
export PATH=$PATH:/usr/local/mysql/bin:$JAVA_HOME/bin

Source the file so the change takes effect in the current session:

[root@ ~]# source .bash_profile

Check the current Java version:

[root@ ~]# java -version
java version "1.6.0_45"
Java(TM) SE Runtime Environment (build 1.6.0_45-b06)
Java HotSpot(TM) 64-Bit Server VM (build 20.45-b01, mixed mode)

2 Install WebLogic

Download wls1036_generic.jar, upload it to the server, and run it with java to install WebLogic:

[root@ ~]# java -jar wls1036_generic.jar

The installer cannot start a GUI and falls back to console mode; follow the wizard.

Unable to instantiate GUI, defaulting to console mode.
Extracting 0%....................................................................................................100%
<-------------------- Oracle Installer - WebLogic 10.3.6.0 ------------------->
Welcome:
--------
This installer will guide you through the installation of WebLogic 10.3.6.0.
Type "Next" or enter to proceed to the next prompt.  If you want to change data entered previously, type "Previous".  You may quit the installer at any time by typing "Exit".
Enter [Exit][Next]>

Press Enter to continue.

Enter new Middleware Home OR [Exit][Previous][Next]> /usr/local/webogic/

Type the target installation directory for WebLogic and press Enter.

<-------------------- Oracle Installer - WebLogic 10.3.6.0 ------------------->
Choose Middleware Home Directory:
---------------------------------
 "Middleware Home" = [/usr/local/webogic]
Use above value or select another option:
    1 - Enter new Middleware Home
    2 - Change to default [/root/Oracle/Middleware]
Enter option number to select OR [Exit][Previous][Next]>

Keep the directory just entered by pressing Enter.

Register for Security Updates:
------------------------------
Provide your email address for security updates and to initiate configuration manager.
   1|Email:[]
   2|Support Password:[]
   3|Receive Security Update:[Yes]
Enter index number to select OR [Exit][Previous][Next]> 3

Type 3 and press Enter.

 "Receive Security Update:" = [Enter new value or use default "Yes"]
Enter [Yes][No]? No

Type No and press Enter.

<-------------------- Oracle Installer - WebLogic 10.3.6.0 ------------------->
Register for Security Updates:
------------------------------
 "Receive Security Update:" = [Enter new value or use default "Yes"]
** Do you wish to bypass initiation of the configuration manager and
**  remain uninformed of critical security issues in your configuration?
Enter [Yes][No]? Yes

Type Yes and press Enter.

<-------------------- Oracle Installer - WebLogic 10.3.6.0 ------------------->
Choose Install Type:
--------------------
Select the type of installation you wish to perform.
 ->1|Typical
    |  Install the following product(s) and component(s):
    | - WebLogic Server
    | - Oracle Coherence
   2|Custom
    |  Choose software products and components to install and perform optional configuration.
Enter index number to select OR [Exit][Previous][Next]> 1

Choose the typical installation: type 1 and press Enter.

<-------------------- Oracle Installer - WebLogic 10.3.6.0 ------------------->
JDK Selection (Any * indicates Oracle Supplied VM):
---------------------------------------------------
JDK(s) chosen will be installed.  Defaults will be used in script string-substitution if installed.
   1|Add Local Jdk
   2|/usr/local/jdk1.6.0_45[x]
*Estimated size of installation: 690.2 MB
Enter 1 to add or >= 2 to toggle selection OR [Exit][Previous][Next]>

If the JDK is not compatible with WebLogic, entry 2 does not appear here, and typing a JDK path by hand produces an error like this:

<-------------------- Oracle Installer - WebLogic 10.3.6.0 ------------------->
JDK Selection (Any * indicates Oracle Supplied VM):
---------------------------------------------------
   1|Add Local Jdk
** Invalid input, only integer selection or page movement command are
** accepted: /usr/local/jdk-16.0.1
*Estimated size of installation: 690.2 MB

WebLogic requires JDK 1.6 or later, but it is also incompatible with JDKs that are too new, such as 16 or 17. If there is no compatibility problem, entry 2 shows an [x] at the right; just press Enter to continue.

<-------------------- Oracle Installer - WebLogic 10.3.6.0 ------------------->
Choose Product Installation Directories:
----------------------------------------
Middleware Home Directory: [/usr/local/webogic]
Product Installation Directories:
   1|WebLogic Server: [/usr/local/webogic/wlserver_10.3]
   2|Oracle Coherence: [/usr/local/webogic/coherence_3.7]
Enter index number to select OR [Exit][Previous][Next]>

Press Enter to go to the next step.

--------------------------------------------------
    WebLogic Platform 10.3.6.0
    |_____WebLogic Server
    |    |_____Core Application Server
    |    |_____Administration Console
    |    |_____Configuration Wizard and Upgrade Framework
    |    |_____Web 2.0 HTTP Pub-Sub Server
    |    |_____WebLogic SCA
    |    |_____WebLogic JDBC Drivers
    |    |_____Third Party JDBC Drivers
    |    |_____WebLogic Server Clients
    |    |_____WebLogic Web Server Plugins
    |    |_____UDDI and Xquery Support
    |    |_____Evaluation Database
    |_____Oracle Coherence
    |    |_____Coherence Product Files
*Estimated size of installation: 690.3 MB
Enter [Exit][Previous][Next]>

Press Enter to continue.

Sep 1, 2022 2:24:07 PM java.util.prefs.FileSystemPreferences$2 run
INFO: Created user preferences directory.
<-------------------- Oracle Installer - WebLogic 10.3.6.0 ------------------->
Installing files..
0%          25%          50%          75%          100%
[------------|------------|------------|------------]
[***************************************************]
Performing String Substitutions...
<-------------------- Oracle Installer - WebLogic 10.3.6.0 ------------------->
Configuring OCM...
0%          25%          50%          75%          100%
[------------|------------|------------|------------]
[***************************************************]
Creating Domains...
<-------------------- Oracle Installer - WebLogic 10.3.6.0 ------------------->
Installation Complete
Congratulations! Installation is complete.
Press [Enter] to continue or type [Exit]>

Installation is complete; press Enter to exit.

<-------------------- Oracle Installer - WebLogic 10.3.6.0 ------------------->
Clean up process in progress ...

3 Configure a WebLogic domain

Change to the following directory:

[root@ bin]# pwd
/usr/local/webogic/wlserver_10.3/common/bin

Run the configuration script:

[root@ ~]# ./config.sh

Follow the wizard's prompts.

Unable to instantiate GUI, defaulting to console mode.
<------------------- Fusion Middleware Configuration Wizard ------------------>
Welcome:
--------
Choose between creating and extending a domain.  Based on your selection, the Configuration Wizard guides you through the steps to generate a new or extend an existing domain.
 ->1|Create a new WebLogic domain
    |  Create a WebLogic domain in your projects directory.
   2|Extend an existing WebLogic domain
    |  Use this option to add new components to an existing domain and modify configuration settings.
Enter index number to select OR [Exit][Next]> 1

Type 1 to create a new domain and press Enter.

<------------------- Fusion Middleware Configuration Wizard ------------------>
Select Domain Source:
---------------------
Select the source from which the domain will be created.  You can create the domain by selecting from the required components or by selecting from a list of existing domain templates.
 ->1|Choose Weblogic Platform components
    |  You can choose the Weblogic component(s) that you want supported in your domain.
   2|Choose custom template
    |  Choose this option if you want to use an existing template.  This could be a custom created template using the Template Builder.
Enter index number to select OR [Exit][Previous][Next]> 1

No custom template is available here, so type 1 and press Enter.

<------------------- Fusion Middleware Configuration Wizard ------------------>
Application Template Selection:
-------------------------------
Available Templates
    |_____Basic WebLogic Server Domain - 10.3.6.0 [wlserver_10.3]x
    |_____Basic WebLogic SIP Server Domain - 10.3.6.0 [wlserver_10.3] [2]
    |_____WebLogic Advanced Web Services for JAX-RPC Extension - 10.3.6.0 [wlserver_10.3] [3]
    |_____WebLogic Advanced Web Services for JAX-WS Extension - 10.3.6.0 [wlserver_10.3] [4]
Enter number exactly as it appears in brackets to toggle selection OR [Exit][Previous][Next]>

The first entry is selected by default; just press Enter to continue.

<------------------- Fusion Middleware Configuration Wizard ------------------>
Edit Domain Information:
------------------------
 |  Name   |      Value      |
_|_________|_________________|
1|  *Name: |   base_domain   |
Enter value for "Name" OR [Exit][Previous][Next]>

Enter a domain name of your own here, or press Enter to use the default.

<------------------- Fusion Middleware Configuration Wizard ------------------>
Select the target domain directory for this domain:
---------------------------------------------------
 "Target Location" = [Enter new value or use default "/usr/local/webogic/user_projects/domains"]
Enter new Target Location OR [Exit][Previous][Next]>

Use the default directory by pressing Enter.

<------------------- Fusion Middleware Configuration Wizard ------------------>
Configure Administrator User Name and Password:
-----------------------------------------------
Create a user to be assigned to the Administrator role.  This user is the default administrator used to start development mode servers.
 |          Name           |                  Value                  |
_|_________________________|_________________________________________|
1|         *Name:          |                weblogic                 |
2|     *User password:     |                                         |
3| *Confirm user password: |                                         |
4|      Description:       | This user is the default administrator. |
Use above value or select another option:
    1 - Modify "Name"
    2 - Modify "User password"
    3 - Modify "Confirm user password"
    4 - Modify "Description"
Enter option number to select OR [Exit][Previous][Next]> 2

Type 2 to set the password.

 "*User password:" = []
Enter new *User password: OR [Exit][Reset][Accept]> weblogic0

Type the password and press Enter.

 |          Name           |                  Value                  |
_|_________________________|_________________________________________|
1|         *Name:          |                weblogic                 |
2|     *User password:     |                ********                 |
3| *Confirm user password: |                                         |
4|      Description:       | This user is the default administrator. |
Use above value or select another option:
    1 - Modify "Name"
    2 - Modify "User password"
    3 - Modify "Confirm user password"
    4 - Modify "Description"
    5 - Discard Changes
Enter option number to select OR [Exit][Previous][Next]> 3

Type 3 to enter the confirmation password.

 "*Confirm user password:" = []
Enter new *Confirm user password: OR [Exit][Reset][Accept]> weblogic0

Type the password again and press Enter.

 |          Name           |                  Value                  |
_|_________________________|_________________________________________|
1|         *Name:          |                weblogic                 |
2|     *User password:     |                ********                 |
3| *Confirm user password: |                ********                 |
4|      Description:       | This user is the default administrator. |
Use above value or select another option:
    1 - Modify "Name"
    2 - Modify "User password"
    3 - Modify "Confirm user password"
    4 - Modify "Description"
    5 - Discard Changes
Enter option number to select OR [Exit][Previous][Next]>

Press Enter to continue.

<------------------- Fusion Middleware Configuration Wizard ------------------>
Domain Mode Configuration:
--------------------------
Enable Development or Production Mode for this domain.
->1|Development Mode 2|Production Mode Enter index number to select OR [Exit][Previous][Next]>2选择生产模式,回车<------------------- Fusion Middleware Configuration Wizard ------------------> Java SDK Selection: ------------------- ->1|Sun SDK 1.6.0_45 @ /usr/local/jdk1.6.0_45 2|Other Java SDK Enter index number to select OR [Exit][Previous][Next]>直接回车<------------------- Fusion Middleware Configuration Wizard ------------------> Select Optional Configuration: ------------------------------ 1|Administration Server [ ] 2|Managed Servers, Clusters and Machines [ ] 3|RDBMS Security Store [ ] Enter index number to select OR [Exit][Previous][Next]>选择1回车,也可以选择2配置weblogic集群<------------------- Fusion Middleware Configuration Wizard ------------------> Select Optional Configuration: ------------------------------ 1|Administration Server [x] 2|Managed Servers, Clusters and Machines [ ] 3|RDBMS Security Store [ ] Enter index number to select OR [Exit][Previous][Next]>第一项Administration Server已经选择,回车继续Configure the Administration Server: ------------------------------------ Each WebLogic Server domain must have one Administration Server. The Administration Server is used to perform administrative tasks. | Name | Value | _|__________________|_____________________| 1| *Name: | AdminServer | 2| *Listen address: | All Local Addresses | 3| Listen port: | 7001 | 4| SSL listen port: | N/A | 5| SSL enabled: | false | Use above value or select another option: 1 - Modify "Name" 2 - Modify "Listen address" 3 - Modify "Listen port" 4 - Modify "SSL enabled" Enter option number to select OR [Exit][Previous][Next]>直接回车<------------------- Fusion Middleware Configuration Wizard ------------------> Creating Domain... 
0% 25% 50% 75% 100% [------------|------------|------------|------------] [***************************************************]安装在这里卡住了,解决的方案是进入/usr/local/jdk1.6.0_45/jre/lib/security目录,编辑java.security 文件,把其中一行改成这样[root@ security]# vi java.security [root@ security]# grep securerandom.source java.security # the securerandom.source property. If an exception occurs when securerandom.source=file:/dev/./urandom # Specifying this system property will override the securerandom.source这个文件编辑存盘之后,删去新建域文件夹,重新配置一遍,就可以完成配置了,最后的一屏显示如下。 <------------------- Fusion Middleware Configuration Wizard ------------------> Creating Domain... 0% 25% 50% 75% 100% [------------|------------|------------|------------] [***************************************************] **** Domain Created Successfully! ****4 启动并登录weblogic进入下面这个目录[root@ base_domain]# pwd /usr/local/webogic/user_projects/domains/base_domain运行weglogic启动脚本./startWebLogic.sh 打开浏览器,就可以登录weblogic了
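The java.security fix above can also be applied programmatically. The sketch below (a minimal illustration; the sample file content is invented, and on a real host you would edit $JAVA_HOME/jre/lib/security/java.security in place after backing it up) rewrites the securerandom.source line so the JVM reads entropy from the non-blocking /dev/./urandom:

```python
import re

# Illustrative stand-in for a fragment of java.security (not the real file).
SAMPLE = """\
# the securerandom.source property. If an exception occurs when
securerandom.source=file:/dev/random
# Specifying this system property will override the securerandom.source
"""

def fix_securerandom(text):
    # Point the entropy source at /dev/./urandom so the Configuration
    # Wizard does not block waiting for entropy from /dev/random.
    return re.sub(r"^securerandom\.source=.*$",
                  "securerandom.source=file:/dev/./urandom",
                  text, flags=re.M)

fixed = fix_securerandom(SAMPLE)
print(fixed)
```

The comment lines are left untouched because the regex is anchored to lines that begin with `securerandom.source=`.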

Lab Report: PolarDB MySQL HTAP: Accelerating Real-Time Data Analytics

1 Set the PolarDB MySQL cluster whitelist; create a database and account

Open the PolarDB MySQL console and find the cluster you created. Set its whitelist to include the public IP of the ECS instance created for the lab. Create a standard account, then create a database and bind it to that account.

2 Apply for a public cluster endpoint and enable automatic row-store/column-store routing

This setting does not exist in RDS. Click "Basic Information" on the left, find the enterprise edition database proxy, and click "Apply" next to the public endpoint to request a public address. Once the endpoint is ready, click "Edit Configuration" and configure automatic row/column-store routing. This routing must be switched on explicitly; it is off by default. You can see that the system has automatically selected the primary node, the read-only node, and the read-only column-store node. When I first ran this lab I overlooked this switch, and the first two SQL statements took only about 2 seconds. After enabling it, the same statements with use_imci_engine set to off took more than 10 seconds, sometimes close to 20.

3 Create the test tables and import data

Create the test tables in DMS: log in to DMS, copy the table-creation SQL script, paste it, and run it. Afterwards, click the refresh icon in the table list to see the new tables. The data import runs on the ECS instance: open the web terminal, copy the data-import script, replace the xxxx in it with the public cluster endpoint you applied for, and paste it into the terminal:

bash /root/benchtpch/tpch/data_kit.sh --parallel 2 --base /usr -s 1 -c 64 --data /root/benchtpch/tpchdata1g --database tpch1g --ddl /root/benchtpch/tpch/columnar.ddl --host mypolar.rwlb.rds.aliyuncs.com --port 3306 -u test_user -p Password123 load

Loading takes roughly ten minutes; the following output means the load has succeeded:

mysql: [Warning] Using a password on the command line interface can be insecure.
Fri Aug 26 03:56:47 PM CST 2022 [INFO] Loaded data part 63 for tpch1g.lineitem from /root/benchtpch/tpchdata1g/lineitem.tbl.63
...
Fri Aug 26 03:56:48 PM CST 2022 [INFO] All 2 threads for tpch1g.lineitem finish
Fri Aug 26 03:56:48 PM CST 2022 [INFO] Finish loading data for database tpch1g with 64 chunks in 2 threads

4 Test the effect of automatic row/column-store routing

4.1 Single-table query

Log in to the database first:

[root@iZuf6c5z7jjuumae5hhkr7Z ~]# /usr/bin/mysql --host mypolar.rwlb.rds.aliyuncs.com --port 3306 -utest_user -pPassword123
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 536874751
Server version: 8.0.13 Source distribution

+-----------------+-------+
| Variable_name   | Value |
+-----------------+-------+
| use_imci_engine | ON    |
+-----------------+-------+
1 row in set (0.01 sec)

Run the single-table query:

mysql> select sum(l_extendedprice * l_discount) as revenue from lineitem where l_shipdate >= date '1994-01-01' and l_shipdate < date '1994-01-01' + interval '1' year and l_discount between .06 - 0.01 and .06 + 0.01 and l_quantity < 24;
+----------------+
| revenue        |
+----------------+
| 123141078.2283 |
+----------------+
1 row in set (0.47 sec)

It ran in 0.47 seconds. Look at the execution plan:

mysql> explain select sum(l_extendedprice * l_discount) as revenue from lineitem where l_shipdate >= date '1994-01-01' and l_shipdate < date '1994-01-01' + interval '1' year and l_discount between .06 - 0.01 and .06 + 0.01 and l_quantity < 24;
| IMCI Execution Plan (max_dop = 2, max_query_mem = 428867584) |
Project | Exprs: temp_table1.SUM(lineitem.l_extendedprice * lineitem.l_discount)
HashGroupby | OutputTable(1): temp_table1 | Grouping: None | Output Grouping: None | Aggrs: SUM(lineitem.l_extendedprice * lineitem.l_discount)
CTableScan | InputTable(0): lineitem | Pred: ((lineitem.l_shipdate >= 01/01/1994 00:00:00.000000) AND (lineitem.l_shipdate < 01/01/1995 00:00:00.000000) AND (lineitem.l_quantity < 24.00) AND ( lineitem.l_discount BTW 0.05 AND 0.07 ))
1 row in set (0.01 sec)

The query used the IMCI execution plan: a columnar table scan (CTableScan) followed by a HashGroupby. Now set use_imci_engine to off and run it again:

mysql> set use_imci_engine = off;
Query OK, 0 rows affected (0.00 sec)

mysql> select sum(l_extendedprice * l_discount) as revenue from lineitem where l_shipdate >= date '1994-01-01' and l_shipdate < date '1994-01-01' + interval '1' year and l_discount between .06 - 0.01 and .06 + 0.01 and l_quantity < 24;
+----------------+
| revenue        |
+----------------+
| 123141078.2283 |
+----------------+
1 row in set (19.61 sec)

This time it took 19.61 seconds, dramatically slower. The execution plan:

mysql> explain select sum(l_extendedprice * l_discount) as revenue from lineitem where l_shipdate >= date '1994-01-01' and l_shipdate < date '1994-01-01' + interval '1' year and l_discount between .06 - 0.01 and .06 + 0.01 and l_quantity < 24;
+----+-------------+----------+------------+------+---------------+------+---------+------+---------+----------+-------------+
| id | select_type | table    | partitions | type | possible_keys | key  | key_len | ref  | rows    | filtered | Extra       |
+----+-------------+----------+------------+------+---------------+------+---------+------+---------+----------+-------------+
|  1 | SIMPLE      | lineitem | NULL       | ALL  | NULL          | NULL | NULL    | NULL | 5970318 |     0.41 | Using where |
+----+-------------+----------+------------+------+---------------+------+---------+------+---------+----------+-------------+
1 row in set, 1 warning (0.00 sec)

A small detour during this test: I had initially forgotten to enable automatic row/column-store routing in the cluster settings. In that state, the first run took about 20 seconds but the second run took only about 2 seconds, presumably because the data had by then been loaded into the buffer pool. With automatic routing enabled, the same statement with use_imci_engine set to off never ran in less than 10 seconds.

4.2 Multi-table join query

mysql> set use_imci_engine = on;
Query OK, 0 rows affected (0.00 sec)

mysql> select l_orderkey, sum(l_extendedprice * (1 - l_discount)) as revenue, o_orderdate, o_shippriority from customer, orders, lineitem where c_mktsegment = 'BUILDING' and c_custkey = o_custkey and l_orderkey = o_orderkey and o_orderdate < date '1995-03-15' and l_shipdate > date '1995-03-15' group by l_orderkey, o_orderdate, o_shippriority order by revenue desc, o_orderdate limit 10;
+------------+-------------+-------------+----------------+
| l_orderkey | revenue     | o_orderdate | o_shippriority |
+------------+-------------+-------------+----------------+
|    2456423 | 406181.0111 | 1995-03-05  |              0 |
|    3459808 | 405838.6989 | 1995-03-04  |              0 |
|     492164 | 390324.0610 | 1995-02-19  |              0 |
|    1188320 | 384537.9359 | 1995-03-09  |              0 |
|    2435712 | 378673.0558 | 1995-02-26  |              0 |
|    4878020 | 378376.7952 | 1995-03-12  |              0 |
|    5521732 | 375153.9215 | 1995-03-13  |              0 |
|    2628192 | 373133.3094 | 1995-02-22  |              0 |
|     993600 | 371407.4595 | 1995-03-05  |              0 |
|    2300070 | 367371.1452 | 1995-03-13  |              0 |
+------------+-------------+-------------+----------------+
10 rows in set (0.75 sec)

With use_imci_engine on, the join ran in 0.75 seconds. The execution plan:

mysql> explain select l_orderkey, sum(l_extendedprice * (1 - l_discount)) as revenue, o_orderdate, o_shippriority from customer, orders, lineitem where c_mktsegment = 'BUILDING' and c_custkey = o_custkey and l_orderkey = o_orderkey and o_orderdate < date '1995-03-15' and l_shipdate > date '1995-03-15' group by l_orderkey, o_orderdate, o_shippriority order by revenue desc, o_orderdate limit 10\G;
*************************** 1. row ***************************
IMCI Execution Plan (max_dop = 2, max_query_mem = 428867584):
Project | Exprs: temp_table4.lineitem.l_orderkey, temp_table4.SUM(lineitem.l_extendedprice * 1.00 - lineitem.l_discount), temp_table4.orders.o_orderdate, temp_table4.orders.o_shippriority
TopK | Limit = 10 | Exprs: temp_table4.SUM(lineitem.l_extendedprice * 1.00 - lineitem.l_discount) DESC, temp_table4.orders.o_orderdate ASC
HashGroupby | OutputTable(4): temp_table4 | Grouping: lineitem.l_orderkey orders.o_orderdate orders.o_shippriority | Output Grouping: lineitem.l_orderkey, orders.o_orderdate, orders.o_shippriority | Aggrs: SUM(lineitem.l_extendedprice * 1.00 - lineitem.l_discount)
HashJoin | HashMode: DYNAMIC | JoinMode: INNER | JoinPred: orders.o_orderkey = lineitem.l_orderkey
HashJoin | HashMode: DYNAMIC | JoinMode: INNER | JoinPred: orders.o_custkey = customer.c_custkey
CTableScan | InputTable(0): orders | Pred: (orders.o_orderdate < 03/15/1995 00:00:00.000000)
CTableScan | InputTable(1): customer | Pred: (customer.c_mktsegment = "BUILDING")
CTableScan | InputTable(2): lineitem | Pred: (lineitem.l_shipdate > 03/15/1995 00:00:00.000000)
1 row in set (0.03 sec)

The plan uses hash joins. Now check the execution time with use_imci_engine off:

mysql> set use_imci_engine = off;
Query OK, 0 rows affected (0.00 sec)

mysql> select l_orderkey, sum(l_extendedprice * (1 - l_discount)) as revenue, o_orderdate, o_shippriority from customer, orders, lineitem where c_mktsegment = 'BUILDING' and c_custkey = o_custkey and l_orderkey = o_orderkey and o_orderdate < date '1995-03-15' and l_shipdate > date '1995-03-15' group by l_orderkey, o_orderdate, o_shippriority order by revenue desc, o_orderdate limit 10;
+------------+-------------+-------------+----------------+
| l_orderkey | revenue     | o_orderdate | o_shippriority |
+------------+-------------+-------------+----------------+
|    2456423 | 406181.0111 | 1995-03-05  |              0 |
|    3459808 | 405838.6989 | 1995-03-04  |              0 |
|     492164 | 390324.0610 | 1995-02-19  |              0 |
|    1188320 | 384537.9359 | 1995-03-09  |              0 |
|    2435712 | 378673.0558 | 1995-02-26  |              0 |
|    4878020 | 378376.7952 | 1995-03-12  |              0 |
|    5521732 | 375153.9215 | 1995-03-13  |              0 |
|    2628192 | 373133.3094 | 1995-02-22  |              0 |
|     993600 | 371407.4595 | 1995-03-05  |              0 |
|    2300070 | 367371.1452 | 1995-03-13  |              0 |
+------------+-------------+-------------+----------------+
10 rows in set (19.69 sec)

Execution time is 19.69 seconds. The plan:

mysql> explain select l_orderkey, sum(l_extendedprice * (1 - l_discount)) as revenue, o_orderdate, o_shippriority from customer, orders, lineitem where c_mktsegment = 'BUILDING' and c_custkey = o_custkey and l_orderkey = o_orderkey and o_orderdate < date '1995-03-15' and l_shipdate > date '1995-03-15' group by l_orderkey, o_orderdate, o_shippriority order by revenue desc, o_orderdate limit 10;
+----+-------------+----------+------------+--------+---------------+---------+---------+--------------------------+---------+----------+----------------------------------------------+
| id | select_type | table    | partitions | type   | possible_keys | key     | key_len | ref                      | rows    | filtered | Extra                                        |
+----+-------------+----------+------------+--------+---------------+---------+---------+--------------------------+---------+----------+----------------------------------------------+
|  1 | SIMPLE      | orders   | NULL       | ALL    | PRIMARY       | NULL    | NULL    | NULL                     | 1490641 |    33.33 | Using where; Using temporary; Using filesort |
|  1 | SIMPLE      | customer | NULL       | eq_ref | PRIMARY       | PRIMARY | 8       | tpch1g.orders.o_custkey  |       1 |    10.00 | Using where                                  |
|  1 | SIMPLE      | lineitem | NULL       | ref    | PRIMARY       | PRIMARY | 8       | tpch1g.orders.o_orderkey |       4 |    33.33 | Using where                                  |
+----+-------------+----------+------------+--------+---------------+---------+---------+--------------------------+---------+----------+----------------------------------------------+
3 rows in set, 1 warning (0.00 sec)

From the plan, this is MySQL's nested-loop join, and it takes much longer.

4.3 Point query

mysql> set use_imci_engine = on;
Query OK, 0 rows affected (0.00 sec)

mysql> select * from lineitem where l_orderkey = 1;
+------------+-----------+-----------+--------------+------------+-----------------+------------+-------+--------------+--------------+------------+--------------+---------------+-------------------+------------+------------------------------------+
| l_orderkey | l_partkey | l_suppkey | l_linenumber | l_quantity | l_extendedprice | l_discount | l_tax | l_returnflag | l_linestatus | l_shipdate | l_commitdate | l_receiptdate | l_shipinstruct    | l_shipmode | l_comment                          |
+------------+-----------+-----------+--------------+------------+-----------------+------------+-------+--------------+--------------+------------+--------------+---------------+-------------------+------------+------------------------------------+
|          1 |    155190 |      7706 |            1 |      17.00 |        21168.23 |       0.04 |  0.02 | N            | O            | 1996-03-13 | 1996-02-12   | 1996-03-22    | DELIVER IN PERSON | TRUCK      | egular courts above the            |
|          1 |     67310 |      7311 |            2 |      36.00 |        45983.16 |       0.09 |  0.06 | N            | O            | 1996-04-12 | 1996-02-28   | 1996-04-20    | TAKE BACK RETURN  | MAIL       | ly final dependencies: slyly bold  |
|          1 |     63700 |      3701 |            3 |       8.00 |        13309.60 |       0.10 |  0.02 | N            | O            | 1996-01-29 | 1996-03-05   | 1996-01-31    | TAKE BACK RETURN  | REG AIR    | riously. regular, express dep      |
|          1 |      2132 |      4633 |            4 |      28.00 |        28955.64 |       0.09 |  0.06 | N            | O            | 1996-04-21 | 1996-03-30   | 1996-05-16    | NONE              | AIR        | lites. fluffily even de            |
|          1 |     24027 |      1534 |            5 |      24.00 |        22824.48 |       0.10 |  0.04 | N            | O            | 1996-03-30 | 1996-03-14   | 1996-04-01    | NONE              | FOB        | pending foxes. slyly re            |
|          1 |     15635 |       638 |            6 |      32.00 |        49620.16 |       0.07 |  0.02 | N            | O            | 1996-01-30 | 1996-02-07   | 1996-02-03    | DELIVER IN PERSON | MAIL       | arefully slyly ex                  |
+------------+-----------+-----------+--------------+------------+-----------------+------------+-------+--------------+--------------+------------+--------------+---------------+-------------------+------------+------------------------------------+
6 rows in set (0.00 sec)

mysql> explain select * from lineitem where l_orderkey = 1;
+----+-------------+----------+------------+------+---------------+---------+---------+-------+------+----------+-------+
| id | select_type | table    | partitions | type | possible_keys | key     | key_len | ref   | rows | filtered | Extra |
+----+-------------+----------+------------+------+---------------+---------+---------+-------+------+----------+-------+
|  1 | SIMPLE      | lineitem | NULL       | ref  | PRIMARY       | PRIMARY | 8       | const |    6 |   100.00 | NULL  |
+----+-------------+----------+------------+------+---------------+---------+---------+-------+------+----------+-------+
1 row in set, 1 warning (0.01 sec)

Even with use_imci_engine set to on, the point query still uses MySQL's row-store plan, not the IMCI execution plan.

5 Summary

PolarDB MySQL's automatic row/column-store routing directs analytical queries to the read-only column-store node and OLTP-style queries to the primary node or the read-only row-store nodes. Columnar storage is better suited to OLAP queries; the column-store node can also use hash joins, sort-merge joins, and other join strategies that fit analytical workloads, and it supports parallel execution, so the performance of analytical queries improves dramatically.
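The routing decision described in the summary can be pictured with a toy model. The sketch below is an invented illustration of the general idea (estimate a statement's cost and send cheap indexed lookups to the row store, expensive scans to the column store); it is not PolarDB's actual routing algorithm, and the threshold and cost inputs are made up for demonstration:

```python
# Toy illustration of cost-based row/column-store routing (NOT PolarDB's
# real implementation). The threshold is an invented tuning knob.
def route(estimated_rows_scanned, has_index_lookup, threshold=100_000):
    """Return the node class a statement would be sent to."""
    if has_index_lookup and estimated_rows_scanned < threshold:
        return "row-store"      # OLTP-style point query stays on row store
    return "column-store"       # analytical scan/aggregate goes to IMCI node

# The point query on lineitem (6 rows via PRIMARY) stays on the row store,
# while the full-scan aggregation (~6,000,000 rows) goes to the column store.
print(route(6, True))           # → row-store
print(route(6_000_000, False))  # → column-store
```

This matches what the experiment shows: the point query in section 4.3 keeps its row-store plan even with the IMCI engine enabled, while the scans and joins are offloaded.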

Mac mini--Booting the System from an External SSD

As macOS kept upgrading, my old Mac mini with its mechanical hard disk ran slower and slower, and the once-pleasant clatter of the spinning disk at boot grew more and more annoying. So I wondered whether I could give the machine a solid-state drive. A web search showed that adding an internal SSD to a Mac mini means opening the case, which is very tightly sealed; Apple does not intend users to open it, and I was afraid one careless move would turn the machine into scrap. Searching further, I found that today's USB 3.0 external SSDs are fast enough to serve as a system disk, and there are articles describing exactly that, so it looked feasible. SSDs cost roughly one RMB per GB; I bought a CoolFish SSD from JD.com and got to work.

1 Connect the external SSD to the Mac mini

Plug it into any USB port, with the machine on or off. With the machine running, an icon for the SSD appears on the desktop as soon as it is connected, and you can open it to see its contents.

2 Format the external SSD

Open Disk Utility and click the connected SSD. Click the small triangle at the top of the left panel, choose "Show All Devices", select the SSD on the left, then click "Erase" in the upper right. Choose APFS as the format and "GUID Partition Map" as the scheme; this is the partition layout macOS requires for a startup disk, and disks with other layouts cannot boot macOS. Confirm to format the SSD. After formatting, the SSD's partitions look like the figure below.

3 Install the system

The usual way to install macOS on the SSD is to reboot and, immediately after pressing the power button, hold Command+R to enter recovery mode, or hold Option to pick a startup disk, then restore macOS from the network onto the SSD. Here I ran into trouble: no matter whether I held Command+R or Option at power-on, neither took effect and the Mac mini always booted straight into the old system. Several remedies found online did not help, so I gave up on that route. Since I could neither enter recovery mode nor choose a startup disk at boot, I needed another approach: download macOS inside the running system and install it from there.

4 Download macOS

Open the App Store and search for macOS; sure enough, there it was. Click "Get" to download. The image is about 12 GB and launches automatically once downloaded. Follow the prompts, and when asked for the installation disk choose the external SSD. After a few reboots, the system finally started from the SSD. The annoying clatter of the mechanical disk at startup was gone, replaced by the blinking blue light of the external SSD; boot time dropped sharply and the spinning beachball appeared far less often. It felt like a new computer.

MySQL--Understanding the InnoDB Storage Engine Status Report

MySQL数据库Innodb存储引擎的状态报告时MySQL数据库性能分析和诊断的重要的工具。这个状态报告涉及的内容很多,包含存储引擎的各个方面,涉及的知识点也比较多,理解起来比较困难。本文通过实际案例来说明这个状态报告中比较重要部分的含义及这个部分如何用于性能诊断。1 环境准备 本文通过sysbench模拟数据库中的业务负载,通过观察数据库在由于负载的情况下innodb状态报告的变化来说明各个部分的含义及如何用于数据库性能诊断。查看状态之前先准备一下数据。docker run --rm=true --name=sb-prepare 0e71335a2211 sysbench --test=/usr/share/sysbench/oltp_read_only.lua --mysql-host=172.20.11.244 --mysql-port=3306 --mysql-db=sysbenchdb --mysql-user="u_sysbench" --mysql-password='123456' --tables=4 --table_size=100000 --threads=2 --time=300 --report-interval=3 --db-driver=mysql --db-ps-mode=disable --skip-trx=on --mysql-ignore-errors=6002,6004,4012,2013,4016,1062 prepare 这里使用docker运行sysbench,向数据库内装载数据,创建4个表,每个表10万行数据,通过容器的输出可以看到sysbench在创建表、载入数据后,为每个表有创建了二级索引。登录数据库查看索引情况mysql> show index from sbtest4; +---------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+ | Table | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | Index_comment | +---------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+ | sbtest4 | 0 | PRIMARY | 1 | id | A | 98712 | NULL | NULL | | BTREE | | | | sbtest4 | 1 | k_4 | 1 | k | A | 17501 | NULL | NULL | | BTREE | | | +---------+------------+----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+ 2 rows in set (0.00 sec) 可以看到,表列k上已有名为k_4的二级索引,为了模拟的环境真实一些,drop掉这个二级索引。mysql> alter table sbtest4 drop key k_4; Query OK, 0 rows affected (0.02 sec) Records: 0 Duplicates: 0 Warnings: 02 业务加载前的innodb状态报告 Innodb存储引擎的状态报告用show engine innodb status命令获得mysql> show engine innodb status\G; *************************** 1. 
row *************************** Type: InnoDB Name: Status: ===================================== 2022-08-24 10:26:21 0x7fd3b8553700 INNODB MONITOR OUTPUT ===================================== Per second averages calculated from the last 37 seconds ----------------- BACKGROUND THREAD ----------------- srv_master_thread loops: 15 srv_active, 0 srv_shutdown, 1119 srv_idle srv_master_thread log flush and writes: 1134 ---------- SEMAPHORES ---------- OS WAIT ARRAY INFO: reservation count 164 OS WAIT ARRAY INFO: signal count 149 RW-shared spins 0, rounds 68, OS waits 34 RW-excl spins 0, rounds 751, OS waits 12 RW-sx spins 37, rounds 1110, OS waits 37 Spin rounds per wait: 68.00 RW-shared, 751.00 RW-excl, 30.00 RW-sx ------------ TRANSACTIONS ------------ Trx id counter 17916 Purge done for trx's n:o < 17915 undo n:o < 0 state: running but idle' History list length 0 LIST OF TRANSACTIONS FOR EACH SESSION: ---TRANSACTION 422022744110928, not started 0 lock struct(s), heap size 1136, 0 row lock(s) -------- -------------- ROW OPERATIONS -------------- 0 queries inside InnoDB, 0 queries in queue 0 read views open inside InnoDB Process ID=45169, Main thread ID=140547469104896, state: sleeping Number of rows inserted 400000, updated 0, deleted 0, read 8 0.00 inserts/s, 0.00 updates/s, 0.00 deletes/s, 0.00 reads/s ---------------------------- END OF INNODB MONITOR OUTPUT ============================ 1 row in set (0.01 sec) ERROR: No query specified 上面的报告中删除了一些不太重要的内容,读这个报告之前,首先要知道这个报告里的值的时间含义时不同的,正如报告开头所说的,报告里每秒的平均值的计算时间时过去37秒,也就是说这个平均值是当前的,几乎时实时的,报告里的其他值则是累加的,是从数据库启动以来的这个值的总数。 报告的第一部分是主线程的状态,如果主线程srv_active loop数量比较多,则数据库的负载比较重。 报告的第二部分是信号量的状态,这个部分十分重要,理解起来也十分困难,如果理解了这个部分,通过信号量的状态对数据库目前的加锁情况就会有一个准确的理解。 Innodb的锁有三种RW-shared,RW-excl,RW-sx,简单来说,会话再获得一个锁时,如果不能获得就会进入自旋状态,在这个状态下,该会话仍然占用当前cpu,经过一段时间后会再次视图获取锁,如果经过一定的次数后仍未获得锁,会话就会进入os wait状态,这时会话会释放CPU,等待操作系统唤醒后再此进入自旋状态。 这个部分的值是累加的,所以需要对比相邻两个时间段的值才有意义。 后面的事务部分和行操作部分后面再做解释。3 业务加载,查看innodb状态报告 运行sysbench,模拟业务加载。 docker run --name=sb-run 
0e71335a2211 sysbench --test=/usr/share/sysbench/oltp_read_only.lua --mysql-host=172.20.11.244 --mysql-port=3306 --mysql-db=sysbenchdb --mysql-user="u_sysbench" --mysql-password=123456 --tables=4 --table_size=100000 --threads=4 --time=2000 --report-interval=10 --db-driver=mysql --db-ps-mode=disable --skip-trx=on --mysql-ignore-errors=6002,6004,4012,2013,4016,1062 run sysbench运行了4个线程,运行的是只读查询。加载后,系统的负载也发生了变化,这个可以用操作系统的top命令看出。 [root@ ~]# top -p 45169 -b -n 1 -H top - 11:12:51 up 9 days, 1:34, 3 users, load average: 4.29, 4.41, 4.05 Threads: 32 total, 2 running, 30 sleeping, 0 stopped, 0 zombie %Cpu(s): 67.8 us, 23.3 sy, 0.0 ni, 0.0 id, 0.0 wa, 1.0 hi, 8.0 si, 0.0 st MiB Mem : 1816.9 total, 120.1 free, 564.9 used, 1131.9 buff/cache MiB Swap: 0.0 total, 0.0 free, 0.0 used. 1212.0 avail Mem PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 45197 mysql 20 0 1167480 333144 22700 R 19.6 17.9 7:28.07 mysqld 45318 mysql 20 0 1167480 333144 22700 S 19.6 17.9 6:29.69 mysqld 45456 mysql 20 0 1167480 333144 22700 S 19.6 17.9 7:26.48 mysqld 45455 mysql 20 0 1167480 333144 22700 R 19.3 17.9 7:26.47 mysqld 45169 mysql 20 0 1167480 333144 22700 S 0.0 17.9 0:00.26 mysqld MySQL下面运行了32个线程,cpu利用率达到了67.8%,有四个线程(对应sysbench)的四个线程的cpu利用率在19%左右。 看一下这个时候的引擎状态报告,由于报告比较长,对各部分的解释以注释的形式给出,不再在报告后单独说明。 mysql> show engine innodb status\G; *************************** 1. 
row *************************** Type: InnoDB Name: Status: ===================================== 2022-08-24 10:58:51 0x7fd3b848d700 INNODB MONITOR OUTPUT ===================================== Per second averages calculated from the last 27 seconds ----------------- BACKGROUND THREAD ----------------- srv_master_thread loops: 33 srv_active, 0 srv_shutdown, 3051 srv_idle srv_master_thread log flush and writes: 3084 ----------主线程的活跃loop有所增加,但是增加不大。 SEMAPHORES ---------- OS WAIT ARRAY INFO: reservation count 166 OS WAIT ARRAY INFO: signal count 151 RW-shared spins 0, rounds 68, OS waits 34 RW-excl spins 0, rounds 751, OS waits 12 RW-sx spins 37, rounds 1110, OS waits 37 Spin rounds per wait: 68.00 RW-shared, 751.00 RW-excl, 30.00 RW-sx ------------这部分数据同前面的几乎报告相同,说明数据库这段时间几乎没有加锁。 TRANSACTIONS ------------ Trx id counter 17916 Purge done for trx's n:o < 17915 undo n:o < 0 state: running but idle History list length 0 LIST OF TRANSACTIONS FOR EACH SESSION: ---TRANSACTION 422022744113664, not started 0 lock struct(s), heap size 1136, 0 row lock(s) ---TRANSACTION 422022744112752, not started 0 lock struct(s), heap size 1136, 0 row lock(s) ---TRANSACTION 422022744111840, not started 0 lock struct(s), heap size 1136, 0 row lock(s) ---TRANSACTION 422022744110928, not started 0 lock struct(s), heap size 1136, 0 row lock(s) ---TRANSACTION 422022744114576, not started 0 lock struct(s), heap size 1136, 0 row lock(s) --------数据库里有了活跃的事务,这个事务都没有枷锁 FILE I/O -------- I/O thread 0 state: waiting for completed aio requests (insert buffer thread) I/O thread 1 state: waiting for completed aio requests (log thread) I/O thread 2 state: waiting for completed aio requests (read thread) I/O thread 3 state: waiting for completed aio requests (read thread) I/O thread 4 state: waiting for completed aio requests (read thread) I/O thread 5 state: waiting for completed aio requests (read thread) I/O thread 6 state: waiting for completed aio requests (write thread) I/O thread 7 state: waiting for 
completed aio requests (write thread) I/O thread 8 state: waiting for completed aio requests (write thread) I/O thread 9 state: waiting for completed aio requests (write thread) Pending normal aio reads: [0, 0, 0, 0] , aio writes: [0, 0, 0, 0] , ibuf aio reads:, log i/o's:, sync i/o's: Pending flushes (fsync) log: 0; buffer pool: 0 2108 OS file reads, 7329 OS file writes, 1092 OS fsyncs 0.00 reads/s, 0 avg bytes/read, 0.00 writes/s, 0.00 fsyncs/s ------IO进程也不活跃,这段时间的平均值为0 ------------------------------------- INSERT BUFFER AND ADAPTIVE HASH INDEX ------------------------------------- Ibuf: size 1, free list len 0, seg size 2, 0 merges merged operations: insert 0, delete mark 0, delete 0 discarded operations: insert 0, delete mark 0, delete 0 Hash table size 34673, node heap has 0 buffer(s) Hash table size 34673, node heap has 0 buffer(s) Hash table size 34673, node heap has 175 buffer(s) Hash table size 34673, node heap has 0 buffer(s) Hash table size 34673, node heap has 173 buffer(s) Hash table size 34673, node heap has 0 buffer(s) Hash table size 34673, node heap has 0 buffer(s) Hash table size 34673, node heap has 0 buffer(s) 6851.86 hash searches/s, 3947.78 non-hash searches/s ---哈希表的buffer值有所增加,这个时间间隔内发生了6851次hash查找, ---innodb运用了自适应哈希技术对sql语句进行了优化, Log sequence number 10116339108 Log flushed up to 10116339108 Pages flushed up to 10116339108 Last checkpoint at 10116339099 0 pending log flushes, 0 pending chkp writes 293 log i/o's done, 0.00 log i/o's/second ----------------------由于是只读测试,没有log产生 BUFFER POOL AND MEMORY ---------------------- Total large memory allocated 137428992 Dictionary memory allocated 331505 Buffer pool size 8191 Free buffers 1022 Database pages 6821 Old database pages 2497 Modified db pages 0 Pending reads 0 Pending writes: LRU 0, flush list 0, single page 0 Pages made young 2120, not young 503 0.00 youngs/s, 0.00 non-youngs/s Pages read 2073, created 5907, written 6319 0.00 reads/s, 0.00 creates/s, 0.00 writes/s Buffer pool hit rate 1000 
/ 1000, young-making rate 0 / 1000 not 0 / 1000 Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead 0.00/s LRU len: 6821, unzip_LRU len: 0 I/O sum[0]:cur[0], unzip sum[0]:cur[0] --缓冲池计划没什么变化 -------------- ROW OPERATIONS -------------- 0 queries inside InnoDB, 0 queries in queue 0 read views open inside InnoDB Process ID=45169, Main thread ID=140547469104896, state: sleeping Number of rows inserted 400083, updated 0, deleted 0, read 310205858 0.00 inserts/s, 0.00 updates/s, 0.00 deletes/s, 203224.58 reads/s ----------------------------这段时间内读次数较多,达到 203224.58每秒。 END OF INNODB MONITOR OUTPUT ============================ 1 row in set (0.00 sec) ERROR: No query specified4 行锁时的innodb状态报告开两个会话,模拟数据库发生行锁mysql> update country set country='Afuhan' where country_id=1; Query OK, 1 row affected (0.00 sec) Rows matched: 1 Changed: 1 Warnings: 0 --用户会话2 mysql> set autocommit=0; mysql> show variables like 'autocommit%'; +---------------+-------+ | Variable_name | Value | +---------------+-------+ | autocommit | OFF | +---------------+-------+ 1 row in set (0.00 sec) mysql> update country set country='afuhan' where country_id=1; ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction 可以看到,MySQL的行锁有超时设置,超时后会尝试重启事务。数据库产生行锁时,事务部分的内容变化最大,这里只截取这部分内容。TRANSACTIONS ------------ Trx id counter 17943 Purge done for trx's n:o < 17941 undo n:o < 0 state: running but idle History list length 1 LIST OF TRANSACTIONS FOR EACH SESSION: --会话事务列表,列出了所有会话 ---TRANSACTION 422022744116400, not started 0 lock struct(s), heap size 1136, 0 row lock(s) ---TRANSACTION 422022744114576, not started 0 lock struct(s), heap size 1136, 0 row lock(s) ---TRANSACTION 422022744113664, not started 0 lock struct(s), heap size 1136, 0 row lock(s) ---TRANSACTION 422022744112752, not started 0 lock struct(s), heap size 1136, 0 row lock(s) ---TRANSACTION 422022744111840, not started 0 lock struct(s), heap size 1136, 0 row lock(s) ---TRANSACTION 17942, ACTIVE 10 sec starting index 
read --这个事务在等待锁,它使用并锁定了一个表 mysql tables in use 1, locked 1 LOCK WAIT 2 lock struct(s), heap size 1136, 1 row lock(s) --事务的线程id,查询id,执行的语句。 MySQL thread id 22, OS thread handle 140547306608384, query id 20019936 localhost root updating update country set country='afuhan' where country_id=1 ---事务等待的锁 ------- TRX HAS BEEN WAITING 10 SEC FOR THIS LOCK TO BE GRANTED: RECORD LOCKS space id 27 page no 3 n bits 176 index PRIMARY of table `sakila`.`country` trx id 17942 lock_mode X locks rec but not gap waiting Record lock, heap no 2 PHYSICAL RECORD: n_fields 5; compact format; info bits 0 0: len 2; hex 0001; asc ;; 1: len 6; hex 000000004612; asc F ;; 2: len 7; hex 74000001610237; asc t a 7;; 3: len 6; hex 41667568616e; asc Afuhan;; 4: len 4; hex 6305beb7; asc c ;; ------------------ ---TRANSACTION 17941, ACTIVE 17 sec --这个事务持有一个行锁 2 lock struct(s), heap size 1136, 1 row lock(s) MySQL thread id 27, OS thread handle 140547306338048, query id 19970640 localhost root 从上面截取的片段可以看出,innodb引擎的状态报告里会列出阻塞和被阻塞事务的详细信息,诸如执行的语句、等待的锁、对应的线程等,从这些信息基本可以判断出行锁的来源。
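除了阅读innodb状态报告,在MySQL 5.7及以上版本还可以直接查询sys库里的视图来定位行锁等待。下面是一个示意查询(假设实例带有sys.innodb_lock_waits视图,列名以实际版本为准):

```sql
-- 查询当前的行锁等待:谁在等锁、谁在阻塞、各自正在执行的语句
SELECT waiting_pid,       -- 等待锁的线程(processlist id)
       waiting_query,     -- 正在等待的语句
       blocking_pid,      -- 持有锁、造成阻塞的线程
       blocking_query,    -- 阻塞者正在执行的语句(阻塞者空闲时为NULL)
       wait_age           -- 已等待的时长
  FROM sys.innodb_lock_waits;
```

拿到blocking_pid后,可以结合SHOW PROCESSLIST进一步确认,必要时用KILL命令结束阻塞会话。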

AnalyticDB MySQL--读懂执行计划

1 AnalyticDB MySQL中sql语句的执行过程 AnalyticDB MySQL采用三层架构,分为接入层、计算层和存储层,这个分层和MySQL等数据库也是基本一致的,由于AnalyticDB MySQL是用于数据仓库的分析型数据库,其底层的实现同MySQL完全不同。 在AnalyticDB MySQL中,这三个层次都是分布式的,接入层主要负责协议层接入、SQL解析和优化、实时写入Sharding、数据调度和查询调度,是sql执行计划中的控制节点;计算层是阿里的羲和分析计算引擎,具备分布式MPP和DAG融合执行能力,结合智能优化器,可支持高并发和复杂SQL混合负载;存储层是阿里的玄武分析存储引擎,基于Raft协议实现的分布式实时强一致高可用存储引擎,通过数据分片和Multi-Raft实现并行。阿里官网的架构图如下: 对应这三个层次,在AnalyticDB MySQL中,一条语句的执行流程是这样的: 1 客户端将SQL语句提交到AnalyticDB MySQL版的前端接入节点(即Controller节点)。Controller节点中的解析器(Parser)会对SQL语句进行解析优化并生成最终的逻辑执行计划(Plan),并根据数据是否需要在网络间传输来决定是否需要将计划切分成多个stage。逻辑执行计划中也会规定特定的执行处理方式,例如Join类型、Join顺序、聚合方式以及数据重分布方式等。 执行计划任务的节点(即Executor节点)会接收最终的逻辑执行计划并将其转化成物理执行计划。物理执行计划由Stage和算子(Operator)组成,在计算层执行,stage在每一个计算节点上的执行被称为任务。任务由一个或多个算子组成。任务的执行中,计算节点从存储节点获取数据,也可能会将过滤等操作下推到存储节点执行。 Executor节点将数据处理的最终结果返回到客户端,或者写入AnalyticDB MySQL版集群的内部表以及其它外部存储系统(如OSS)中。2 AnalyticDB MySQL 执行计划的基本概念 AnalyticDB MySQL中的语句大多是并行执行的,其执行计划具有分布式和并行执行的特点,执行计划中有几个基本概念也和MySQL不同,读懂执行计划的前提是对这些基本概念要清楚,主要有Stage、Task和算子三个基本概念。 Stage(执行阶段),AnalyticDB MySQL版中的查询会首先被切分为多个Stage来执行,一个Stage就是执行计划中某一部分的物理实体。Stage的数据来源可以是底层存储系统中的数据或者网络中传输的数据,一个Stage由分布在不同Executor节点上相同类型的Task组成,多个Task会并行处理数据。 Task是一个Stage在某个Executor节点上的执行实体,多个同类型的Task组成一个Stage,在集群内部并行处理数据。 算子(Operator)是AnalyticDB MySQL版的基本数据处理单元。AnalyticDB MySQL版会根据算子所表达的语义或算子间的依赖关系,决定使用并行还是串行执行来处理数据。 这三个概念之间的关系如下图所示:3 AnalyticDB MySQL中的stage stage是执行计划的重要概念,通过一个具体的stage的例子来看一下就明白了。 上面的图来源于阿里官网,是AnalyticDB MySQL版的SQL诊断功能以树形图的形式展现的SQL查询执行计划的stage层执行计划树。stage是自下而上执行的,先由具有扫描算子的Stage进行数据扫描,再经过中间Stage节点的层层处理后,再由最上层的根节点将查询结果返回客户端。 数据由下层流向上层有三种方式,如果不理解这三种方式,就很难理解执行计划。这三种方式是广播(broadcast)、重分区(repartition)和汇集(gather)。看一下官网的三个图,对这三种方式就一目了然了。 图里的上游下游是时间上的概念,而不是空间或层次上的概念,数据总是从上游流转到下游。3.1 广播 上游的表被复制到下游的每个节点。3.2 重分区 上游的分区分布和下游要求的分区分布方式不同,根据下游的要求重新在每个节点上分布分区。3.3 汇集 上游的数据被汇集到一个下游节点上进行汇总计算。4 算子 执行计划中的任务(task)不必多做解释,一个阶段(stage)在一个节点上的执行就是任务。 执行计划里的算子因为涉及到分布式和并行,出现了几个在MySQL中没有的算子,需要说明一下。4.1 分布式和并行相关算子 RemoteSource 该算子用来表示当前Stage的输入数据是通过网络从远程节点传输过来的。 RemoteExchange
该算子用来表示上游向下游Stage传输数据时所用的方法。上下游Stage间传输数据的方法有如下几种: A)Broadcast:表示上游Stage中每个计算节点的数据都会复制到所有下游Stage的计算节点。 B)Repartition:表示上游Stage中每个节点的数据会按照固定的规则切分后,再分发到下游Stage的指定计算节点。 C)Gather:表示上游Stage中每个节点的数据会集中到下游Stage中某一个特定的计算节点。 可以看出来,RemoteExchange的三个方法和stage间传递数据的三种方式相对应。4.2 其它算子 其它算子比较好理解,同MySQL数据库执行计划中的操作基本类似,如Aggregation算子通过sum()、count()、avg()等函数对数据进行聚合或分组聚合操作;DistinctLimit算子对应SQL语句中的DISTINCT LIMIT;MarkDistinct对应SQL语句中的count(DISTINCT)操作;Project对应SQL语句中对特定字段的投影操作。 有一个算子也是AnalyticDB MySQL特有的,这个就是StageOutput算子,这个算子用于将当前Stage处理后的数据通过网络传输到下游Stage的节点。4.3 算子的执行 算子怎样组成任务可以从下面的图中看出来。5 解释和分析执行计划 有了上面的准备,我们可以解释和分析AnalyticDB MySQL的执行计划了,下面这个例子来自官网。要分析的sql语句先列出来。SELECT count(*) FROM nation, region, customer WHERE c_nationkey = n_nationkey AND n_regionkey = r_regionkey AND r_name = 'ASIA'; 这条sql语句并不复杂,是一个三个表的内连接,获得区域为亚洲的用户的数量,限制条件加在region表上。这条语句的执行计划看起来要复杂一些:Output[count(*)] │ Outputs: [count:bigint] │ Estimates: {rows: 1 (8B)} │ count(*) := count └─ Aggregate(FINAL) │ Outputs: [count:bigint] │ Estimates: {rows: 1 (8B)} │ count := count(`count_1`) └─ LocalExchange[SINGLE] () │ Outputs: [count_0_1:bigint] │ Estimates: {rows: 1 (8B)} └─ RemoteExchange[GATHER] │ Outputs: [count_0_2:bigint] │ Estimates: {rows: 1 (8B)} └─ Aggregate(PARTIAL) │ Outputs: [count_0_4:bigint] │ Estimates: {rows: 1 (8B)} │ count_4 := count(*) └─ InnerJoin[(`c_nationkey` = `n_nationkey`)][$hashvalue, $hashvalue_0_6] │ Outputs: [] │ Estimates: {rows: 302035 (4.61MB)} │ Distribution: REPLICATED ├─ Project[] │ │ Outputs: [c_nationkey:integer, $hashvalue:bigint] │ │ Estimates: {rows: 1500000 (5.72MB)} │ │ $hashvalue := `combine_hash`(BIGINT '0', COALESCE(`$operator$hash_code`(`c_nationkey`), 0)) │ └─ RuntimeFilter │ │ Outputs: [c_nationkey:integer] │ │ Estimates: {rows: 1500000 (5.72MB)} │ ├─ TableScan[adb:AdbTableHandle{schema=tpch, tableName=customer, partitionColumnHandles=[c_custkey]}] │ │ Outputs: [c_nationkey:integer] │ │ Estimates: {rows: 1500000 (5.72MB)} │ │ c_nationkey := AdbColumnHandle{columnName=c_nationkey, type=4,
isIndexed=true} │ └─ RuntimeCollect │ │ Outputs: [n_nationkey:integer] │ │ Estimates: {rows: 5 (60B)} │ └─ LocalExchange[ROUND_ROBIN] () │ │ Outputs: [n_nationkey:integer] │ │ Estimates: {rows: 5 (60B)} │ └─ RuntimeScan │ Outputs: [n_nationkey:integer] │ Estimates: {rows: 5 (60B)} └─ LocalExchange[HASH][$hashvalue_0_6] ("n_nationkey") │ Outputs: [n_nationkey:integer, $hashvalue_0_6:bigint] │ Estimates: {rows: 5 (60B)} └─ Project[] │ Outputs: [n_nationkey:integer, $hashvalue_0_10:bigint] │ Estimates: {rows: 5 (60B)} │ $hashvalue_10 := `combine_hash`(BIGINT '0', COALESCE(`$operator$hash_code`(`n_nationkey`), 0)) └─ RemoteExchange[REPLICATE] │ Outputs: [n_nationkey:integer] │ Estimates: {rows: 5 (60B)} └─ InnerJoin[(`n_regionkey` = `r_regionkey`)][$hashvalue_0_7, $hashvalue_0_8] │ Outputs: [n_nationkey:integer] │ Estimates: {rows: 5 (60B)} │ Distribution: REPLICATED ├─ Project[] │ │ Outputs: [n_nationkey:integer, n_regionkey:integer, $hashvalue_0_7:bigint] │ │ Estimates: {rows: 25 (200B)} │ │ $hashvalue_7 := `combine_hash`(BIGINT '0', COALESCE(`$operator$hash_code`(`n_regionkey`), 0)) │ └─ RuntimeFilter │ │ Outputs: [n_nationkey:integer, n_regionkey:integer] │ │ Estimates: {rows: 25 (200B)} │ ├─ TableScan[adb:AdbTableHandle{schema=tpch, tableName=nation, partitionColumnHandles=[]}] │ │ Outputs: [n_nationkey:integer, n_regionkey:integer] │ │ Estimates: {rows: 25 (200B)} │ │ n_nationkey := AdbColumnHandle{columnName=n_nationkey, type=4, isIndexed=true} │ │ n_regionkey := AdbColumnHandle{columnName=n_regionkey, type=4, isIndexed=true} │ └─ RuntimeCollect │ │ Outputs: [r_regionkey:integer] │ │ Estimates: {rows: 1 (4B)} │ └─ LocalExchange[ROUND_ROBIN] () │ │ Outputs: [r_regionkey:integer] │ │ Estimates: {rows: 1 (4B)} │ └─ RuntimeScan │ Outputs: [r_regionkey:integer] │ Estimates: {rows: 1 (4B)} └─ LocalExchange[HASH][$hashvalue_0_8] ("r_regionkey") │ Outputs: [r_regionkey:integer, $hashvalue_0_8:bigint] │ Estimates: {rows: 1 (4B)} └─ ScanProject[table = 
adb:AdbTableHandle{schema=tpch, tableName=region, partitionColumnHandles=[]}] Outputs: [r_regionkey:integer, $hashvalue_0_9:bigint] Estimates: {rows: 1 (4B)}/{rows: 1 (B)} $hashvalue_9 := `combine_hash`(BIGINT '0', COALESCE(`$operator$hash_code`(`r_regionkey`), 0)) r_regionkey := AdbColumnHandle{columnName=r_regionkey, type=4, isIndexed=true} 执行计划里的LocalExchange是在本地节点上进行的数据交换操作,RuntimeCollect接收的是从前面的算子获得的数据。 执行计划从下往上看,最下面一个算子是ScanProject,扫描的是region表,扫描后的数据由LocalExchange[HASH]算子进行哈希计算,然后和nation表进行哈希join,join的键是n_regionkey和r_regionkey,join的输出是n_nationkey。join操作的执行计划看起来比较复杂,可以看到对nation表的TableScan操作,以及执行中获得的对region表哈希操作后的数据,这个数据只有一行。这个join的结果再和用户表进行join后,在每个节点上进行部分聚合(Aggregate(PARTIAL)),最后汇集(RemoteExchange[GATHER])到一个节点上进行最终聚合(Aggregate(FINAL)),然后输出(Output[count(*)])。
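如果想自己生成这样的执行计划,可以在查询语句前加EXPLAIN关键字(下面是示意写法,表名沿用上文的tpch示例;据官方文档,AnalyticDB MySQL还支持EXPLAIN ANALYZE查看带实际执行统计的计划):

```sql
-- 查看逻辑执行计划(不实际执行查询)
EXPLAIN
SELECT count(*)
  FROM nation, region, customer
 WHERE c_nationkey = n_nationkey
   AND n_regionkey = r_regionkey
   AND r_name = 'ASIA';
```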

Oracle21C--使用RMAN备份和恢复容器数据库

使用rman备份的前提数据库处于归档模式下,在Oracle 21C中,可插拔数据库和根容器数据库使用同一组在线日志文件和归档日志文件,只要打开了容器数据库的归档模式,它下面的可插拔数据库的归档模式也一起打开了。打开数据库归档模式后,就可以使用rman进行数据库在线备份了。1 执行数据库全备使用rman登录数据库[oracle@iZ2ze0t8khaprrpfvmevjiZ ~]$ rman target /查看一下rman现在的配置参数RMAN> show all; RMAN configuration parameters for database with db_unique_name ORCL are: CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default CONFIGURE BACKUP OPTIMIZATION OFF; # default CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default CONFIGURE CONTROLFILE AUTOBACKUP ON; # default CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET; # default CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default CONFIGURE MAXSETSIZE TO UNLIMITED; # default CONFIGURE ENCRYPTION FOR DATABASE OFF; # default CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default CONFIGURE COMPRESSION ALGORITHM 'BASIC' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE ; # default CONFIGURE RMAN OUTPUT TO KEEP FOR 7 DAYS; # default CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/opt/oracle/dbs/snapcf_orcl.f'; # default从现有的配置参数来看,控制文件自动备份是默认打开的,这样在进行数据库备份时,也会备份控制文件和参数文件。看一下连接数据库的schemaRMAN> report schema; using target database control file instead of recovery catalog Report of database schema for database with db_unique_name ORCL List of Permanent Datafiles =========================== File Size(MB) Tablespace RB segs Datafile Name ---- -------- -------------------- ------- ------------------------ 1 1340 SYSTEM YES /opt/oracle/oradata/ORCL/system01.dbf 3 620 SYSAUX NO /opt/oracle/oradata/ORCL/sysaux01.dbf 4 110 UNDOTBS1 YES /opt/oracle/oradata/ORCL/undotbs01.dbf 5 280 PDB$SEED:SYSTEM NO /opt/oracle/oradata/ORCL/pdbseed/system01.dbf 6 340 PDB$SEED:SYSAUX NO /opt/oracle/oradata/ORCL/pdbseed/sysaux01.dbf 7 5 USERS NO /opt/oracle/oradata/ORCL/users01.dbf 8 100 
PDB$SEED:UNDOTBS1 NO /opt/oracle/oradata/ORCL/pdbseed/undotbs01.dbf 10 280 PDB1:SYSTEM YES /opt/oracle/oradata/ORCL/pdb1/system01.dbf 11 350 PDB1:SYSAUX NO /opt/oracle/oradata/ORCL/pdb1/sysaux01.dbf 12 100 PDB1:UNDOTBS1 YES /opt/oracle/oradata/ORCL/pdb1/undotbs01.dbf 13 100 PDB1:TBS_TEST NO /opt/oracle/oradata/ORCL/pdb1/test01.dbf List of Temporary Files ======================= File Size(MB) Tablespace Maxsize(MB) Tempfile Name ---- -------- -------------------- ----------- -------------------- 1 237 TEMP 32767 /opt/oracle/oradata/ORCL/temp01.dbf 2 35 PDB$SEED:TEMP 32767 /opt/oracle/oradata/ORCL/pdbseed/temp012022-08-18_17-18-50-054-PM.dbf 3 35 PDB1:TEMP 32767 /opt/oracle/oradata/ORCL/pdb1/temp012022-08-18_17-18-50-054-PM.dbf 从report schema的输出中可以看到容器数据库,PDB种子数据库、PDB数据库的表空间和数据文件。 执行数据库备份,同时备份归档日志文件。RMAN> backup database plus archivelog; --先备份归档日志 Starting backup at 19-AUG-22 current log archived using channel ORA_DISK_1 channel ORA_DISK_1: starting archived log backup set channel ORA_DISK_1: specifying archived log(s) in backup set input archived log thread=1 sequence=3 RECID=1 STAMP=1113130687 channel ORA_DISK_1: starting piece 1 at 19-AUG-22 channel ORA_DISK_1: finished piece 1 at 19-AUG-22 piece handle=/opt/oracle/homes/OraDBHome21cEE/dbs/0115i1m0_1_1_1 tag=TAG20220819T105808 comment=NONE channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03 Finished backup at 19-AUG-22 --备份容器数据库的数据文件 Starting backup at 19-AUG-22 using channel ORA_DISK_1 channel ORA_DISK_1: starting full datafile backup set channel ORA_DISK_1: specifying datafile(s) in backup set input datafile file number=00001 name=/opt/oracle/oradata/ORCL/system01.dbf input datafile file number=00003 name=/opt/oracle/oradata/ORCL/sysaux01.dbf input datafile file number=00004 name=/opt/oracle/oradata/ORCL/undotbs01.dbf input datafile file number=00007 name=/opt/oracle/oradata/ORCL/users01.dbf channel ORA_DISK_1: starting piece 1 at 19-AUG-22 channel ORA_DISK_1: finished piece 1 at 19-AUG-22 
--备份可插拔数据库pdb1的数据文件 piece handle=/opt/oracle/homes/OraDBHome21cEE/dbs/0215i1m3_2_1_1 tag=TAG20220819T105811 comment=NONE channel ORA_DISK_1: backup set complete, elapsed time: 00:00:35 channel ORA_DISK_1: starting full datafile backup set channel ORA_DISK_1: specifying datafile(s) in backup set input datafile file number=00011 name=/opt/oracle/oradata/ORCL/pdb1/sysaux01.dbf input datafile file number=00010 name=/opt/oracle/oradata/ORCL/pdb1/system01.dbf input datafile file number=00012 name=/opt/oracle/oradata/ORCL/pdb1/undotbs01.dbf input datafile file number=00013 name=/opt/oracle/oradata/ORCL/pdb1/test01.dbf channel ORA_DISK_1: starting piece 1 at 19-AUG-22 channel ORA_DISK_1: finished piece 1 at 19-AUG-22 --备份种子数据库的数据文件 piece handle=/opt/oracle/homes/OraDBHome21cEE/dbs/0315i1n6_3_1_1 tag=TAG20220819T105811 comment=NONE channel ORA_DISK_1: backup set complete, elapsed time: 00:00:15 channel ORA_DISK_1: starting full datafile backup set channel ORA_DISK_1: specifying datafile(s) in backup set input datafile file number=00006 name=/opt/oracle/oradata/ORCL/pdbseed/sysaux01.dbf input datafile file number=00005 name=/opt/oracle/oradata/ORCL/pdbseed/system01.dbf input datafile file number=00008 name=/opt/oracle/oradata/ORCL/pdbseed/undotbs01.dbf channel ORA_DISK_1: starting piece 1 at 19-AUG-22 channel ORA_DISK_1: finished piece 1 at 19-AUG-22 piece handle=/opt/oracle/homes/OraDBHome21cEE/dbs/0415i1nm_4_1_1 tag=TAG20220819T105811 comment=NONE channel ORA_DISK_1: backup set complete, elapsed time: 00:00:15 Finished backup at 19-AUG-22 --将现有redo日志归档后再次备份归档日志,这样备份集里就包含了至当前时刻的所有redo日志。 Starting backup at 19-AUG-22 current log archived using channel ORA_DISK_1 channel ORA_DISK_1: starting archived log backup set channel ORA_DISK_1: specifying archived log(s) in backup set input archived log thread=1 sequence=4 RECID=2 STAMP=1113130757 channel ORA_DISK_1: starting piece 1 at 19-AUG-22 channel ORA_DISK_1: finished piece 1 at 19-AUG-22 piece 
handle=/opt/oracle/homes/OraDBHome21cEE/dbs/0515i1o6_5_1_1 tag=TAG20220819T105918 comment=NONE channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01 Finished backup at 19-AUG-22 --备份控制文件和参数文件 Starting Control File and SPFILE Autobackup at 19-AUG-22 piece handle=/opt/oracle/homes/OraDBHome21cEE/dbs/c-1640847823-20220819-00 comment=NONE Finished Control File and SPFILE Autobackup at 19-AUG-22也可以对单独的可插拔数据执行备份RMAN> backup pluggable database pdb1 plus archivelog; Starting backup at 19-AUG-22 current log archived using channel ORA_DISK_1 channel ORA_DISK_1: starting archived log backup set channel ORA_DISK_1: specifying archived log(s) in backup set input archived log thread=1 sequence=3 RECID=1 STAMP=1113130687 input archived log thread=1 sequence=4 RECID=2 STAMP=1113130757 input archived log thread=1 sequence=5 RECID=3 STAMP=1113130941 channel ORA_DISK_1: starting piece 1 at 19-AUG-22 channel ORA_DISK_1: finished piece 1 at 19-AUG-22 piece handle=/opt/oracle/homes/OraDBHome21cEE/dbs/0715i1tt_7_1_1 tag=TAG20220819T110221 comment=NONE channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03 Finished backup at 19-AUG-22 Starting backup at 19-AUG-22 using channel ORA_DISK_1 channel ORA_DISK_1: starting full datafile backup set channel ORA_DISK_1: specifying datafile(s) in backup set input datafile file number=00011 name=/opt/oracle/oradata/ORCL/pdb1/sysaux01.dbf input datafile file number=00010 name=/opt/oracle/oradata/ORCL/pdb1/system01.dbf input datafile file number=00012 name=/opt/oracle/oradata/ORCL/pdb1/undotbs01.dbf input datafile file number=00013 name=/opt/oracle/oradata/ORCL/pdb1/test01.dbf channel ORA_DISK_1: starting piece 1 at 19-AUG-22 channel ORA_DISK_1: finished piece 1 at 19-AUG-22 piece handle=/opt/oracle/homes/OraDBHome21cEE/dbs/0815i1u0_8_1_1 tag=TAG20220819T110224 comment=NONE channel ORA_DISK_1: backup set complete, elapsed time: 00:00:15 Finished backup at 19-AUG-22 Starting backup at 19-AUG-22 current log archived using channel 
ORA_DISK_1 channel ORA_DISK_1: starting archived log backup set channel ORA_DISK_1: specifying archived log(s) in backup set input archived log thread=1 sequence=6 RECID=4 STAMP=1113130960 channel ORA_DISK_1: starting piece 1 at 19-AUG-22 channel ORA_DISK_1: finished piece 1 at 19-AUG-22 piece handle=/opt/oracle/homes/OraDBHome21cEE/dbs/0915i1uh_9_1_1 tag=TAG20220819T110240 comment=NONE channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01 Finished backup at 19-AUG-22 Starting Control File and SPFILE Autobackup at 19-AUG-22 piece handle=/opt/oracle/homes/OraDBHome21cEE/dbs/c-1640847823-20220819-01 comment=NONE Finished Control File and SPFILE Autobackup at 19-AUG-22 这里就只对可插拔数据库pdb1的数据文件进行了备份,其它的归档日志和控制文件、参数文件的备份是相同的。2 模拟数据库故障 这次模拟的故障是数据库有未提交事务时,当前redo日志故障,这种情况下需要执行数据库的不完全恢复。故障发生时数据库已经关闭。3 执行数据库不完全恢复3.1 用rman连接并启动数据库[oracle@iZ2ze0t8khaprrpfvmevjiZ ~]$ rman target / connected to target database (not started) --目标数据库没有启动,启动目标数据库至mount模式 RMAN> startup mount; Oracle instance started database mounted Total System Global Area 763362712 bytes Fixed Size 9690520 bytes Variable Size 549453824 bytes Database Buffers 201326592 bytes Redo Buffers 2891776 bytes3.2 查看故障情况RMAN> list failure; using target database control file instead of recovery catalog Database Role: PRIMARY List of Database Failures ========================= Failure ID Priority Status Time Detected Summary ---------- -------- --------- ------------- ------- 202 CRITICAL OPEN 19-AUG-22 Online log group 1 is unavailable 205 HIGH OPEN 19-AUG-22 Online log member /opt/oracle/oradata/ORCL/redo01.log is corrupt可以查看故障的详细情况RMAN> list failure detail; Database Role: PRIMARY List of Database Failures ========================= Failure ID Priority Status Time Detected Summary ---------- -------- --------- ------------- ------- 202 CRITICAL OPEN 19-AUG-22 Online log group 1 is unavailable Impact: Database might be unrecoverable or become unrecoverable Failure ID Priority Status Time Detected Summary ---------- --------
--------- ------------- ------- 205 HIGH OPEN 19-AUG-22 Online log member /opt/oracle/oradata/ORCL/redo01.log is corrupt Impact: Redo log group may become unavailable3.3 查看修复建议 rman现在具有修复建议功能,可以根据数据库故障的不同推荐相应的修复方案RMAN> advise failure; Database Role: PRIMARY List of Database Failures ========================= Failure ID Priority Status Time Detected Summary ---------- -------- --------- ------------- ------- 202 CRITICAL OPEN 19-AUG-22 Online log group 1 is unavailable Impact: Database might be unrecoverable or become unrecoverable Failure ID Priority Status Time Detected Summary ---------- -------- --------- ------------- ------- 205 HIGH OPEN 19-AUG-22 Online log member /opt/oracle/oradata/ORCL/redo01.log is corrupt Impact: Redo log group may become unavailable analyzing automatic repair options; this may take some time allocated channel: ORA_DISK_1 channel ORA_DISK_1: SID=430 device type=DISK analyzing automatic repair options complete Mandatory Manual Actions ======================== no manual actions available Optional Manual Actions ======================= no manual actions available Automated Repair Options ======================== Option Repair Description ------ ------------------ 1 Perform incomplete database recovery to SCN 2743364 Strategy: The repair includes point-in-time recovery with some data loss Repair script: /opt/oracle/diag/rdbms/orcl/orcl/hm/reco_18471703.hm rman对数据库故障的情况进行了分析,提供了自动修复方案及修复脚本,如果数据库之前没有执行过备份,这里会显示没有可用的修复选项。 由于丢失的是当前在线日志组的文件,只能执行不完全恢复,恢复方案里给出了数据库可以恢复到的最大的SCN。修复建议给出的脚本可用文本编辑器来查看。[root@ ~]# cat /opt/oracle/diag/rdbms/orcl/orcl/hm/reco_18471703.hm # database point-in-time recovery restore database until scn 2743364; recover database until scn 2743364; alter database open resetlogs; 脚本一看便知,是数据库恢复的标准步骤。3.4 修复数据库 可以手动运行脚本来进行数据库恢复,rman也提供了修复数据库的命令,运行一下就会执行修复脚本。RMAN> repair failure; --修复策略说明 Strategy: The repair includes point-in-time recovery with some data loss Repair script: /opt/oracle/diag/rdbms/orcl/orcl/hm/reco_18471703.hm --修复脚本的内容
contents of repair script: # database point-in-time recovery restore database until scn 2743364; recover database until scn 2743364; alter database open resetlogs; --建议yes后就可以执行修复脚本 Do you really want to execute the above repair (enter YES or NO)? yes executing repair script --还原数据文件 Starting restore at 19-AUG-22 using channel ORA_DISK_1 --不需要还原的数据文件(pdb种子数据库的数据文件)自动跳过 skipping datafile 5; already restored to file /opt/oracle/oradata/ORCL/pdbseed/system01.dbf skipping datafile 6; already restored to file /opt/oracle/oradata/ORCL/pdbseed/sysaux01.dbf skipping datafile 8; already restored to file /opt/oracle/oradata/ORCL/pdbseed/undotbs01.dbf --还原数据文件 channel ORA_DISK_1: starting datafile backup set restore channel ORA_DISK_1: specifying datafile(s) to restore from backup set channel ORA_DISK_1: restoring datafile 00001 to /opt/oracle/oradata/ORCL/system01.dbf channel ORA_DISK_1: restoring datafile 00003 to /opt/oracle/oradata/ORCL/sysaux01.dbf channel ORA_DISK_1: restoring datafile 00004 to /opt/oracle/oradata/ORCL/undotbs01.dbf channel ORA_DISK_1: restoring datafile 00007 to /opt/oracle/oradata/ORCL/users01.dbf channel ORA_DISK_1: reading from backup piece /opt/oracle/homes/OraDBHome21cEE/dbs/0215i1m3_2_1_1 channel ORA_DISK_1: piece handle=/opt/oracle/homes/OraDBHome21cEE/dbs/0215i1m3_2_1_1 tag=TAG20220819T105811 channel ORA_DISK_1: restored backup piece 1 channel ORA_DISK_1: restore complete, elapsed time: 00:00:35 channel ORA_DISK_1: starting datafile backup set restore channel ORA_DISK_1: specifying datafile(s) to restore from backup set channel ORA_DISK_1: restoring datafile 00010 to /opt/oracle/oradata/ORCL/pdb1/system01.dbf channel ORA_DISK_1: restoring datafile 00011 to /opt/oracle/oradata/ORCL/pdb1/sysaux01.dbf channel ORA_DISK_1: restoring datafile 00012 to /opt/oracle/oradata/ORCL/pdb1/undotbs01.dbf channel ORA_DISK_1: restoring datafile 00013 to /opt/oracle/oradata/ORCL/pdb1/test01.dbf channel ORA_DISK_1: reading from backup piece 
/opt/oracle/homes/OraDBHome21cEE/dbs/0815i1u0_8_1_1 channel ORA_DISK_1: piece handle=/opt/oracle/homes/OraDBHome21cEE/dbs/0815i1u0_8_1_1 tag=TAG20220819T110224 channel ORA_DISK_1: restored backup piece 1 channel ORA_DISK_1: restore complete, elapsed time: 00:00:15 Finished restore at 19-AUG-22 --开始恢复数据库 Starting recover at 19-AUG-22 using channel ORA_DISK_1 --执行介质恢复 starting media recovery archived log for thread 1 with sequence 4 is already on disk as file /opt/oracle/homes/OraDBHome21cEE/dbs/arch1_4_1113067087.dbf archived log for thread 1 with sequence 5 is already on disk as file /opt/oracle/homes/OraDBHome21cEE/dbs/arch1_5_1113067087.dbf archived log for thread 1 with sequence 6 is already on disk as file /opt/oracle/homes/OraDBHome21cEE/dbs/arch1_6_1113067087.dbf archived log file name=/opt/oracle/homes/OraDBHome21cEE/dbs/arch1_4_1113067087.dbf thread=1 sequence=4 archived log file name=/opt/oracle/homes/OraDBHome21cEE/dbs/arch1_5_1113067087.dbf thread=1 sequence=5 media recovery complete, elapsed time: 00:00:00 Finished recover at 19-AUG-22 Statement processed repair failure complete4 打开并连接数据库 使用rman修复数据库完成后,数据库已经处于open状态,由于数据库没有配置启动可插拔数据库的启动触发器,可插拔数据库处于mount状态,需要运行alter pluggable database命令将其打开。[oracle@iZ2ze0t8khaprrpfvmevjiZ ~]$ sqlplus / as sysdba SQL> show pdbs; CON_ID CON_NAME OPEN MODE RESTRICTED ---------- ------------------------------ ---------- ---------- 2 PDB$SEED READ ONLY NO 3 PDB1 MOUNTED SQL> alter pluggable database PDB1 open; Pluggable database altered.登录可插拔数据库pdb1,查看表中的数据 [oracle@iZ2ze0t8khaprrpfvmevjiZ ~]$ sqlplus test/test123@iZ2ze0t8khaprrpfvmevjiZ/pdb1 SQL> select * from test; no rows selected表恢复到了事务运行之前的状态。 可以看到,使用rman备份数据库十分简单方便,rman的数据库修复功能可以避免大量的手动操作,可以有效避免繁琐复杂操作造成的失误。
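本文开头提到,rman在线备份的前提是数据库处于归档模式。备份前可以先确认一下归档模式是否已打开,若未打开,可按下面的步骤打开(标准操作步骤的示意,需在数据库可以重启的时间窗口执行):

```sql
-- 确认当前归档模式
SELECT log_mode FROM v$database;

-- 若为NOARCHIVELOG,需重启到mount状态后打开归档
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
ARCHIVE LOG LIST;   -- 再次确认,Database log mode应显示Archive Mode
```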

Oracle-使用DBCA silent模式创建数据库

DBA和后台开发人员偶尔会遇到要重建数据库的情况,大多数情况下,可以使用DBCA配置助手,以交互式方式,在图形界面中完成。但是在有些特殊的情况下,Oracle数据库服务器上没有安装图形界面,临时安装图形界面又比较困难或费时。这时该怎么办? 遇到这种情况,有经验的DBA可以通过create database 命令创建,这种方式步骤比较繁琐,需要理解大量的创建参数,需要对Oracle数据库有比较深入的了解,有没有其它办法可以在命令行模式下重建数据库? Oracle 的DBCA提供了-silent(静默)运行模式,使用这种模式创建数据库,不需要数据库服务器支持图形界面,步骤也相对简单一些,只需要编辑一个响应文件,然后运行DBCA命令。1 环境准备 使用DBCA创建数据库前,需要做一下准备,获取响应文件的模板,搜集一下响应文件需要的信息。 DBCA命令位于数据库ORACLE_HOME目录下bin目录中,一般这个目录会在shell变量的PATH路径中,可以直接运行这个命令,不需要切换到ORACLE_HOME目录下bin目录运行。 响应文件模板可以从网上找一个下载下来,也可以使用Oracle安装包中的响应文件模板。Oracle数据库安装完成后默认也安装了dbca的响应文件模板,这个模板在下面的目录中。 [oracle@iZ2ze0t8khaprrpfvmevjiZ dbca]$ pwd /opt/oracle/product/21c/dbhome_1/assistants/dbca 这个目录中有一个响应文件的样本,可以根据自己的需要修改。要想启动dbca命令,虽然是命令行模式,也需要设置操作系统环境变量DISPLAY,可以设置成下面的值[oracle@iZ2ze0t8khaprrpfvmevjiZ ~]$ export DISPLAY=:0.0 运行DBCA另一个要注意的是服务器etc目录下hosts文件中主机名不能解析为本地环回地址,否则,命令运行会报错,这个好像与java有关。2 编辑响应文件 DBCA的响应文件中大部分参数都不是强制的,可以采用默认设置,如下所示:[oracle@iZ2ze0t8khaprrpfvmevjiZ dbca]$ grep "# Mandatory : NO" dbca.rsp -A2 # Mandatory : NO #----------------------------------------------------------------------------- characterSet= # Mandatory : NO #----------------------------------------------------------------------------- listeners= # Mandatory : NO #----------------------------------------------------------------------------- variablesFile= # Mandatory : NO #----------------------------------------------------------------------------- variables= # Mandatory : NO #----------------------------------------------------------------------------- initParams= # Mandatory : NO #----------------------------------------------------------------------------- memoryPercentage= # Mandatory : NO #----------------------------------------------------------------------------- databaseType= # Mandatory : NO #----------------------------------------------------------------------------- automaticMemoryManagement= # Mandatory : NO #----------------------------------------------------------------------------- totalMemory= 
在选择某些项后会有一些强制的选项[oracle@iZ2ze0t8khaprrpfvmevjiZ dbca]$ grep "# Mandatory : YES" dbca.rsp -A2 # Mandatory : YES, if the value of registerWithDirService is TRUE #----------------------------------------------------------------------------- dirServiceUserName= # Mandatory : YES, if the value of registerWithDirService is TRUE
#----------------------------------------------------------------------------- dirServicePassword= # Mandatory : YES, if the value of registerWithDirService is TRUE #----------------------------------------------------------------------------- walletPassword= 在创建数据库,大部分情况下要支持中文,不能采用默认的字符集,数据文件的存放目录通常也不会采用默认的设置,本次创建数据库用的响应文件是下面这个responseFileVersion=/oracle/assistants/rspfmt_dbca_response_schema_v21.0.0 gdbName=orcl sid=orcl sysPassword=sys123 systemPassword=sys123 emExpressPort=5500 templateName=/opt/oracle/product/21c/dbhome_1/assistants/dbca/templates/General_Purpose.dbc characterSet=AL32UTF8 可以看到这个响应文件很简单,这个响应文件中,第一行是强制的,不能更改,templateName也是必须的,如果没有,创建数据库的过程中会报错退出。3 创建数据库 有了响应文件,就可以创建数据库了[oracle@iZ2ze0t8khaprrpfvmevjiZ ~]$ dbca -silent -createDatabase -responseFile ./dbca.rsp [WARNING] [DBT-06208] The 'SYS' password entered does not conform to the Oracle recommended standards. CAUSE: a. Oracle recommends that the password entered should be at least 8 characters in length, contain at least 1 uppercase character, 1 lower case character and 1 digit [0-9]. b.The password entered is a keyword that Oracle does not recommend to be used as password ACTION: Specify a strong password. If required refer Oracle documentation for guidelines. [WARNING] [DBT-06208] The 'SYSTEM' password entered does not conform to the Oracle recommended standards. CAUSE: a. Oracle recommends that the password entered should be at least 8 characters in length, contain at least 1 uppercase character, 1 lower case character and 1 digit [0-9]. b.The password entered is a keyword that Oracle does not recommend to be used as password ACTION: Specify a strong password. If required refer Oracle documentation for guidelines. 
Prepare for db operation 10% complete Copying database files 40% complete Creating and starting Oracle instance 42% complete 46% complete 52% complete 56% complete 60% complete Completing Database Creation 66% complete Executing Post Configuration Actions 100% complete Database creation complete. For details check the logfiles at: /opt/oracle/cfgtoollogs/dbca/ORCL. 数据库创建成功了,创建过程中的日志存储在/opt/oracle/cfgtoollogs/dbca/ORCL目录中。4 创建数据库后的工作4.1 登录数据库查看可插拔数据库信息 SQL> show pdbs; CON_ID CON_NAME OPEN MODE RESTRICTED ---------- ------------------------------ ---------- ---------- 2 PDB$SEED READ WRITE YES 可以看到,dbca没有创建可插拔数据库,这个和响应文件的默认值有关# Name : numberOfPDBs # Datatype : Number # Description : Specify the number of pdb to be created # Valid values : 0 to 4094 # Default value : 0 numberOfPDBs= #----------------------------------------------------------------------------- # Name : pdbName # Datatype : String # Description : Specify the pdbname/pdbanme prefix if one or more pdb need to be created # Valid values : Check Oracle21c Administrator's Guide # Default value : None pdbName= 从上面响应文件的部分内容可以看出,默认的pdb数量是0.也就是不创建可插拔数据库。4.2 创建可插拔数据库 创建可插拔数据库之前,需要创建可插拔数据库数据文件的目录,SQL> ! mkdir /opt/oracle/oradata/ORCL/pdb1 SQL> ! ls -ld /opt/oracle/oradata/ORCL/pdb1 drwxr-xr-x 2 oracle oinstall 6 Aug 19 09:09 /opt/oracle/oradata/ORCL/pdb1 使用下面命令创建可插拔数据库,这个命令应在根容器数据库中运行SQL> create pluggable database pdb1 admin user pdb1_admin identified by oracle roles=(connect) file_name_convert=('/opt/oracle/oradata/ORCL/pdbseed','/opt/oracle/oradata/ORCL/pdb1'); Pluggable database created. 创建完毕后查看启动创建的pdbSQL> show pdbs; CON_ID CON_NAME OPEN MODE RESTRICTED ---------- ------------------------------ ---------- ---------- 2 PDB$SEED READ WRITE YES 3 PDB1 MOUNTED SQL> alter pluggable database PDB1 open; Pluggable database altered. 
SQL> show pdbs; CON_ID CON_NAME OPEN MODE RESTRICTED ---------- ------------------------------ ---------- ---------- 2 PDB$SEED READ WRITE YES 3 PDB1 READ WRITE NO 将当前会话的容器切换为pdb1,创建表空间和用户,给用户授权SQL> alter session set container=PDB1; Session altered. --查看一下pdb1的数据文件 SQL> select file_name from dba_data_files; FILE_NAME -------------------------------------------------------------------------------- /opt/oracle/oradata/ORCL/pdb1/undotbs01.dbf /opt/oracle/oradata/ORCL/pdb1/sysaux01.dbf /opt/oracle/oradata/ORCL/pdb1/system01.dbf --创建一个表空间 SQL> create tablespace tbs_test datafile '/opt/oracle/oradata/ORCL/pdb1/test01.dbf' size 100M; Tablespace created. --创建用户,默认表空间设置为刚才创建的表空间 SQL> create user test default tablespace tbs_test identified by "test123"; User created. --给用户授予资源和连接权限 SQL> grant connect,resource to test; Grant succeeded. 4.3 登录可插拔数据库 查看监听信息[oracle@iZ2ze0t8khaprrpfvmevjiZ orcl]$ lsnrctl status LSNRCTL for Linux: Version 21.0.0.0.0 - Production on 19-AUG-2022 09:24:31 Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=iZ2ze0t8khaprrpfvmevjiZ)(PORT=1521))) STATUS of the LISTENER ------------------------ Alias LISTENER Version TNSLSNR for Linux: Version 21.0.0.0.0 - Production Start Date 17-AUG-2022 13:40:13 Uptime 1 days 19 hr. 44 min. 17 sec Trace Level off Security ON: Local OS Authentication SNMP OFF Listener Parameter File /opt/oracle/homes/OraDBHome21cEE/network/admin/listener.ora Listener Log File /opt/oracle/diag/tnslsnr/iZ2ze0t8khaprrpfvmevjiZ/listener/alert/log.xml Listening Endpoints Summary... (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=iZ2ze0t8khaprrpfvmevjiZ)(PORT=1521))) (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521))) Services Summary... Service "c8209f27c6b16005e053362ee80ae60e" has 1 instance(s). Instance "orcl", status READY, has 1 handler(s) for this service... Service "e68ecedbc9564717e053f40b14acf0d9" has 1 instance(s). Instance "orcl", status READY, has 1 handler(s) for this service... Service "orcl" has 1 instance(s). 
  Instance "orcl", status READY, has 1 handler(s) for this service...
Service "orclXDB" has 1 instance(s).
  Instance "orcl", status READY, has 1 handler(s) for this service...
Service "pdb1" has 1 instance(s).
  Instance "orcl", status READY, has 1 handler(s) for this service...
The command completed successfully

Oracle has already created a service named pdb1; this service can be used to log in to the pdb1 pluggable database:

[oracle@iZ2ze0t8khaprrpfvmevjiZ ~]$ sqlplus test/test123@iZ2ze0t8khaprrpfvmevjiZ/pdb1

Oracle: What Happens When the Current Redo Log File Is Damaged While the Database Is Running

Redo log files are among the most important files in an Oracle database. When a database is created, you are usually required to create multiple online log groups, each containing at least two members; that is, each log group has at least two log files with identical contents, and these files should ideally reside on different disks. Even when a disk array is used for storage, each log group is generally still configured with two log files. All of this shows how important the redo log files are: if they are damaged, the loss may be irrecoverable. Here we run an experiment to see what happens when the current redo file is damaged while the database holds uncommitted transactions. Two kinds of damage are simulated in turn: deleting the redo file with rm, and zeroing out the redo file with dd.

1 Experiment environment

This experiment uses Oracle Database 21c, the latest release at the time of writing; detailed version information can be queried from the v$version view.

SQL> select BANNER_FULL from v$version;

BANNER_FULL
------------------------------------------------------
Oracle Database 21c Enterprise Edition Release 21.0.0.0.0 - Production
Version 21.3.0.0.0

Check the redo log group configuration of the database:

SQL> select GROUP# , bytes,status from v$log;

    GROUP#      BYTES STATUS
---------- ---------- ----------------
         1  209715200 CURRENT
         2  209715200 INACTIVE
         3  209715200 INACTIVE

The database has three log groups, and group 1 is the current one. Next, look at the members of each log group:

SQL> l
  1* select GROUP#,STATUS,MEMBER from v$logfile
SQL> c /,status/
  1* select GROUP#,MEMBER from v$logfile
SQL> /

    GROUP# MEMBER
---------- ----------------------------------------------------------------
         3 /opt/oracle/oradata/ORCLCDB/redo03.log
         2 /opt/oracle/oradata/ORCLCDB/redo02.log
         1 /opt/oracle/oradata/ORCLCDB/redo01.log

Each log group has only one member; the log file of the current group 1 is /opt/oracle/oradata/ORCLCDB/redo01.log.

SQL> alter session set container=ORCLPDB1;

Session altered.
SQL> select GROUP#,MEMBER from v$logfile;

    GROUP# MEMBER
---------- ----------------------------------------------------------------
         3 /opt/oracle/oradata/ORCLCDB/redo03.log
         2 /opt/oracle/oradata/ORCLCDB/redo02.log
         1 /opt/oracle/oradata/ORCLCDB/redo01.log

The pluggable database's log groups are the same as the container database's; they share the same log groups.

2 Simulating accidental deletion of all log files in the current log group

Open another session to run a transaction in. This session logs in directly to the pluggable database as the user test, not as sys or system, to simulate a realistic scenario. From here on, this session is called the user session, and the earlier session connected as sysdba is called the admin session.

[oracle@iZ2ze0t8khaprrpfvmevjiZ ~]$ sqlplus test/test123@iZ2ze0t8khaprrpfvmevjiZ/orclpdb1

After logging in, check that autocommit is disabled in the session:

SQL> show autoc;
autocommit OFF

Autocommit is already off; if it were not, it could be turned off with set autocommit off. Look at the tables used in this experiment:

SQL> select * from test2;

        ID NAME                     SALARY
---------- -------------------- ----------
         1 zhangsan                    473
         2 lisi                        473
         3 wangwu                      473

SQL> select * from test;

        ID NAME                     SALARY
---------- -------------------- ----------
         1 zhangsan                    601
         2 lisi                        474
         3 wangwu                      474

Run two DML (data manipulation) statements:

SQL> update test2 set salary=500 where id=1;

1 row updated.

SQL> delete from test where id=1;

1 row deleted.

At this point the database has an uncommitted transaction. Now simulate accidentally deleting the current redo file with rm. Switch to the admin session:

SQL> host

The host command drops into the operating system shell:

[oracle@iZ2ze0t8khaprrpfvmevjiZ ~]$ ls -l /opt/oracle/oradata/ORCLCDB/redo01.log
-rw-r----- 1 oracle oinstall 209715712 Aug 18 08:59 /opt/oracle/oradata/ORCLCDB/redo01.log
[oracle@iZ2ze0t8khaprrpfvmevjiZ ~]$ rm /opt/oracle/oradata/ORCLCDB/redo01.log
[oracle@iZ2ze0t8khaprrpfvmevjiZ ~]$ ls -l /opt/oracle/oradata/ORCLCDB/redo01.log
ls: cannot access '/opt/oracle/oradata/ORCLCDB/redo01.log': No such file or directory
[oracle@iZ2ze0t8khaprrpfvmevjiZ ~]$ exit
exit

The commands above first list the sole member of the current redo log group, /opt/oracle/oradata/ORCLCDB/redo01.log, delete it with rm, list it again to confirm the deletion, then exit the shell and return to sqlplus.

SQL> select GROUP# , bytes,status from v$log;

    GROUP#      BYTES STATUS
---------- ---------- ----------------
         1  209715200 CURRENT
         2  209715200 INACTIVE
         3  209715200 INACTIVE

The current log group id is still 1. Checking the database alert log shows no errors related to the deletion:

[oracle@iZ2ze0t8khaprrpfvmevjiZ trace]$ pwd
/opt/oracle/diag/rdbms/orclcdb/ORCLCDB/trace
[oracle@iZ2ze0t8khaprrpfvmevjiZ trace]$ tail -10
alert_ORCLCDB.log
ORCLPDB1(3):Clearing Resource Manager plan via parameter
2022-08-18T08:53:55.916914+08:00
ORCLPDB1(3):Setting Resource Manager plan SCHEDULER[0x52BD]:DEFAULT_MAINTENANCE_PLAN via scheduler window
ORCLPDB1(3):Setting Resource Manager plan DEFAULT_MAINTENANCE_PLAN via parameter
2022-08-18T08:59:38.906285+08:00
TABLE SYS.WRP$_REPORTS: ADDED INTERVAL PARTITION SYS_P4541 (4613) VALUES LESS THAN (TO_DATE(' 2022-08-19 01:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
TABLE SYS.WRP$_REPORTS_DETAILS: ADDED INTERVAL PARTITION SYS_P4542 (4613) VALUES LESS THAN (TO_DATE(' 2022-08-19 01:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
TABLE SYS.WRP$_REPORTS_TIME_BANDS: ADDED INTERVAL PARTITION SYS_P4545 (4612) VALUES LESS THAN (TO_DATE(' 2022-08-18 01:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
2022-08-18T09:01:17.925156+08:00
ORCLPDB1(3):TABLE SYS.ACTIVITY_TABLE$: ADDED INTERVAL PARTITION SYS_P3413 (110) VALUES LESS THAN (10570)

Switch to the user session and commit:

SQL> commit;

Commit complete.

As you can see, the transaction commits successfully. Back in the admin session:

SQL> alter system switch logfile;
SQL> select GROUP# , bytes,status from v$log;

    GROUP#      BYTES STATUS
---------- ---------- ----------------
         1  209715200 ACTIVE
         2  209715200 CURRENT
         3  209715200 INACTIVE

After switching the log file, the current redo log group id is 2 and group 1 is ACTIVE.

SQL> alter database clear logfile group 1;
alter database clear logfile group 1
ERROR at line 1:
ORA-01624: log 1 needed for crash recovery of instance ORCLCDB (thread 1)
ORA-00312: online log 1 thread 1: '/opt/oracle/oradata/ORCLCDB/redo01.log'

Clearing and recreating log group 1 fails, because group 1 is needed for crash recovery. Use the clear unarchived logfile command to clear and recreate the group instead:

SQL> alter database clear unarchived logfile group 1;

Database altered.

The command succeeds.

SQL> !ls -l /opt/oracle/oradata/ORCLCDB/redo01.log
-rw-r----- 1 oracle oinstall 209715712 Aug 18 09:16 /opt/oracle/oradata/ORCLCDB/redo01.log

Checking the log file shows it has been recreated.

3 Simulating damage to the log file itself

Use dd to overwrite and empty the online log file, simulating damage to the file itself. Switch to the user session and start a new transaction:

SQL> update test2 set salary=500 where id=1;

1 row updated.
SQL> insert into test values (1,'zhangsan', 1500);

1 row created.

Switch to the admin session and check the current log group and its members:

SQL> select GROUP# , bytes,status from v$log;

    GROUP#      BYTES STATUS
---------- ---------- ----------------
         1  209715200 UNUSED
         2  209715200 CURRENT
         3  209715200 UNUSED

SQL> select GROUP#,MEMBER from v$logfile;

    GROUP# MEMBER
---------- --------------------------------------------------------------------------------
         3 /opt/oracle/oradata/ORCLCDB/redo03.log
         2 /opt/oracle/oradata/ORCLCDB/redo02.log
         1 /opt/oracle/oradata/ORCLCDB/redo01.log

The current log group id is 2, and its sole member file is /opt/oracle/oradata/ORCLCDB/redo02.log. Switch to the root user and empty this file:

[root@ ~]# dd if=/dev/null of=/opt/oracle/oradata/ORCLCDB/redo02.log
0+0 records in
0+0 records out
0 bytes copied, 5.028e-05 s, 0.0 kB/s
[root@ ~]# ls -l /opt/oracle/oradata/ORCLCDB/redo02.log
-rw-r--r-- 1 root root 0 Aug 18 09:40 /opt/oracle/oradata/ORCLCDB/redo02.log

The file has been emptied; its size is now 0 bytes. Switch to the user session and commit the transaction:

SQL> commit;

Commit complete.

The transaction still commits successfully. Back in the admin session, checking the alert log shows no error messages.

[oracle@iZ2ze0t8khaprrpfvmevjiZ trace]$ ls -l /opt/oracle/oradata/ORCLCDB/redo02.log
-rw-r--r-- 1 root root 0 Aug 18 09:40 /opt/oracle/oradata/ORCLCDB/redo02.log

The current redo file's size is still 0.

[oracle@iZ2ze0t8khaprrpfvmevjiZ fd]$ ps -ef|grep lgwr|grep -v grep
oracle    9063     1  0 Aug17 ?
00:00:06 ora_lgwr_ORCLCDB
[oracle@iZ2ze0t8khaprrpfvmevjiZ fd]$ cd /proc/9063/fd
[oracle@iZ2ze0t8khaprrpfvmevjiZ fd]$ ls -l|grep redo
lrwx------ 1 oracle oinstall 64 Aug 18 09:51 258 -> /opt/oracle/oradata/ORCLCDB/redo01.log
lrwx------ 1 oracle oinstall 64 Aug 18 09:51 259 -> /opt/oracle/oradata/ORCLCDB/redo02.log (deleted)
lrwx------ 1 oracle oinstall 64 Aug 18 09:51 260 -> /opt/oracle/oradata/ORCLCDB/redo03.log

Looking at the files opened by the redo log writer process, redo02.log is marked as deleted.

SQL> select GROUP# , bytes,status from v$log;

    GROUP#      BYTES STATUS
---------- ---------- ----------------
         1  209715200 UNUSED
         2  209715200 CURRENT
         3  209715200 UNUSED

The current redo log group is still 2. Switch the log group:

SQL> alter system switch logfile;
alter system switch logfile
ERROR at line 1:
ORA-03113: end-of-file on communication channel
Process ID: 12168
Session ID: 3 Serial number: 11933

The session was disconnected. Checking the database alert log shows the database has crashed:

[oracle@iZ2ze0t8khaprrpfvmevjiZ trace]$ tail -30 alert_ORCLCDB.log
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 7
2022-08-18T09:16:44.462008+08:00
Completed: alter database clear unarchived logfile group 1
2022-08-18T09:53:25.915074+08:00
Errors in file /opt/oracle/diag/rdbms/orclcdb/ORCLCDB/trace/ORCLCDB_lgwr_9063.trc:
ORA-00316: log 2 of thread 1, type 0 in header is not log file
ORA-00312: online log 2 thread 1: '/opt/oracle/oradata/ORCLCDB/redo02.log'
2022-08-18T09:53:25.915417+08:00
Errors in file /opt/oracle/diag/rdbms/orclcdb/ORCLCDB/trace/ORCLCDB_lgwr_9063.trc:
ORA-00316: log 2 of thread 1, type 0 in header is not log file
ORA-00312: online log 2 thread 1: '/opt/oracle/oradata/ORCLCDB/redo02.log'
Errors in file /opt/oracle/diag/rdbms/orclcdb/ORCLCDB/trace/ORCLCDB_lgwr_9063.trc (incident=16994) (PDBNAME=CDB$ROOT):
ORA-316 [] [] [] [] [] [] [] [] [] [] [] []
Incident details in: /opt/oracle/diag/rdbms/orclcdb/ORCLCDB/incident/incdir_16994/ORCLCDB_lgwr_9063_i16994.trc
2022-08-18T09:53:26.136046+08:00
USER (ospid: 12168): terminating the instance due to ORA error 316
2022-08-18T09:53:26.136802+08:00
Cause - 'Instance is being terminated due to fatal process LGWR is terminating with error 316'
Memory (Avail / Total) = 26.98M / 1816.86M
Swap (Avail / Total) = 0.00M / 0.00M
2022-08-18T09:53:26.337603+08:00
System state dump requested by (instance=1, osid=12168), summary=[abnormal instance termination].
System State dumped to trace file /opt/oracle/diag/rdbms/orclcdb/ORCLCDB/trace/ORCLCDB_diag_9042.trc
2022-08-18T09:53:28.696702+08:00
Dumping diagnostic data in directory=[cdmp_20220818095326], requested by (instance=1, osid=12168), summary=[abnormal instance termination].
2022-08-18T09:53:32.210921+08:00
Instance terminated by USER, pid = 12168

The database instance was terminated by ORA error 316, which was caused by /opt/oracle/oradata/ORCLCDB/redo02.log being emptied.

4 Analysis and conclusions

These two simple simulations show that if the current log file is accidentally deleted, recovery is possible as long as the problem is noticed in time: there is no need to shut down the database, just switch logs and clear/recreate the log group. This follows from how Linux deletes files. When a file is removed with rm, the operating system does not touch the file's contents; it only removes the file name from the parent directory entry, so the database's lgwr process can still operate on the file through its already open file handle. That is why the log group can still be switched, cleared, and recreated; had the database been shut down first, recovery would have required much more work. If, on the other hand, the redo file itself is damaged or emptied, lgwr can no longer access the log file and cannot complete the log switch; it reports error 316 and the instance is terminated.
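The conclusion above rests on how Linux handles deleted-but-open files: unlinking a name does not free the inode while a process still holds an open descriptor to it, which is exactly why lgwr kept writing after rm. A minimal Python sketch of this behavior (the file name is made up for illustration; this is not Oracle code):

```python
import os
import tempfile

# Open a file and keep the descriptor, mimicking lgwr holding the redo log open.
path = os.path.join(tempfile.mkdtemp(), "redo01.log")
f = open(path, "w+")
f.write("redo record 1\n")
f.flush()

# "rm" the file: only the directory entry goes away, not the inode,
# because our descriptor still references it.
os.remove(path)
assert not os.path.exists(path)

# Writes through the existing descriptor still succeed.
f.write("redo record 2\n")
f.flush()

# Both records are still readable through the open descriptor.
f.seek(0)
print(f.read())
f.close()
```

Running this prints both records even though `ls` would no longer find the file; the inode is reclaimed only when the last descriptor is closed.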

Navicat Database Management and Design Tool: Data Transfer

Navicat is a widely used database management and development tool. Developers in particular like to use it for database administration, design, development, and import/export. Navicat's graphical interface greatly simplifies database maintenance and development; for data maintenance especially, it provides many tools that lighten the workload. This article introduces Navicat's data transfer feature, which makes it easy to move data between databases.

Transferring data between databases is something data maintainers do all the time. Without a tool at hand, the usual approach is command-line export and import; with heterogeneous databases, you may have to export tables from the source database into text files of a specific format and then import them into the target. This can become quite involved and is rarely done by hand; most experienced DBAs and developers reach for a third-party tool instead, such as PL/SQL Developer, which is popular on Oracle. Navicat likewise provides import and export features.

With Navicat's data transfer feature you can move table data between databases and avoid the cumbersome export/import cycle. A short demonstration follows.

1 Connect Navicat to the source and target databases

Connecting Navicat to a MySQL database is very simple: open a new connection, choose MySQL, fill in the host IP, port, user name, and password in the dialog shown above, click Test Connection, and confirm once it succeeds. After connecting to both databases, they appear in the navigation pane on the left; in this demonstration the source database is mysqlECS and the target is mysqlLocal.

2 Data transfer from MySQL to MySQL

Open the Tools menu and click Data Transfer; the following dialog appears. Choose the source instance and database and the target instance and database; after selection, the details of both are displayed. Click Next. The next screen lists the objects that can be transferred: tables, functions, views, and events are all transferable. You can select All to transfer every selected object, or Custom to pick one or a few objects. Here a single table is selected; continue to the summary screen and click Start to begin the transfer.

The screenshot above shows the transfer process. Navicat first fetches from the source library in two steps, first the table structure and then the table's records; it then drops the target table on the target database, recreates it with the same name as in the source, and finally transfers the records.

3 Data transfer from MySQL to Oracle

Navicat also supports data transfer between heterogeneous databases, for example from MySQL to Oracle. Choose Oracle as the target database and click Next; in this case only tables can be transferred. Click Next to reach the summary screen, then click Start to begin the transfer. Probably because of the conversions involved, transfers between heterogeneous databases are noticeably slower.

4 Advanced data transfer features

Navicat's data transfer also supports some advanced options: you can choose the target table's name, customize the column mapping from source table to target table, define the number of rows per batch when transferring all rows, or restrict the record set to transfer with a filter condition. All of these are very practical.

AnalyticDB for MySQL: How Tables and Indexes Differ from MySQL

AnalyticDB for MySQL is Alibaba Cloud's real-time, highly concurrent online analytical data warehouse for massive data. Its syntax is fully MySQL-compatible, so on the surface using AnalyticDB for MySQL feels just like using MySQL, and MySQL DDL can be used to create tables. But this is only the surface: AnalyticDB for MySQL has a completely different technical architecture from MySQL and differs considerably in how tables and indexes are used. Ignoring these differences and treating it as plain MySQL can cause serious problems.

1 Tables in AnalyticDB for MySQL

AnalyticDB for MySQL supports two table types: replicated tables and ordinary tables. A replicated table, also called a broadcast table, stores a full copy of its data on every node of the cluster and corresponds to a dimension table in a data warehouse. Its data volume should therefore be small; Alibaba recommends at most 20,000 rows per replicated table.

An ordinary table in AnalyticDB for MySQL is not the same concept as an ordinary table in MySQL: it is in fact a sharded table. The CREATE TABLE syntax is MySQL-compatible. If you create a table with MySQL syntax and the table has a primary key, AnalyticDB for MySQL uses the primary key as the shard key; if the table has no primary key, AnalyticDB for MySQL adds an __adb_auto_id__ column as both the primary key and the distribution key.

Suppose a user creates a table in AnalyticDB for MySQL with the following MySQL DDL:

CREATE TABLE t (c1 bigint, c2 int, c3 varchar, PRIMARY KEY(c1,c2));
Query OK, 0 rows affected (2.37 sec)

Looking at the table definition in AnalyticDB for MySQL gives:

SHOW CREATE TABLE t;
+-------+-------------------------------------------------------------------------------------------------------------------------------+
| Table | Create Table                                                                                                                  |
+-------+-------------------------------------------------------------------------------------------------------------------------------+
| t     | Create Table `t` ( `c1` bigint, `c2` int, `c3` varchar, primary key (c1,c2) ) DISTRIBUTED BY HASH(`c1`,`c2`) INDEX_ALL='Y'    |
+-------+-------------------------------------------------------------------------------------------------------------------------------+
1 row in set (0.04 sec)

As the generated DDL shows, in AnalyticDB for MySQL sharding (via a distribution key) is mandatory unless you use a replicated table, while partitioning is up to the user. Distribution uses a hash algorithm, and each table supports exactly one distribution key. This is natural: AnalyticDB for MySQL uses the distribution key to spread a table's data across the cluster's nodes, and since only one distribution scheme can apply at a time, multiple distribution keys would be meaningless. When choosing a distribution key, avoid date, time, and timestamp types; prefer business-meaningful keys such as a transaction ID, device ID, or user ID, or an auto-increment key if no such column exists. The goal is to exploit AnalyticDB for MySQL's ability to distribute computation across nodes. As a simple example, suppose we query a period of data for a batch of users: the natural unit of the query is the user, so we want each user's query spread across different nodes, not each time slice's query spread across different nodes. When choosing the distribution key, also give priority to frequently queried columns and join keys.

Another point worth noting: in AnalyticDB for MySQL the number of shards of a table is determined automatically and cannot be set manually. It depends on the database version and the cluster configuration; the higher the configuration, the more shards a table has.

On top of distribution, a table can also be partitioned. When a single shard grows large enough to hurt query performance, the shard needs to be partitioned. Partitioning here resembles MySQL's range partitioning, and time is the most common partition key.
Partitioning also makes it possible to use AnalyticDB for MySQL's LIFECYCLE N feature and its hot/cold data tiering: partitions beyond the lifecycle are dropped automatically, and the hot/cold partition ratio can be configured. Data in a secondary partition should stay static; rows should not move between secondary partitions, and if secondary partitions are updated frequently, the partition key was probably chosen poorly.

The primary key uniquely identifies each row and is defined with PRIMARY KEY at table creation. In AnalyticDB for MySQL, only tables with a primary key support data modification, i.e. updates and deletes. The primary key can be a single column or a combination of columns, and it must contain the distribution key and the partition key; for performance, place the distribution key and partition key as close to the front of the primary key as possible.

A clustered index can also be created on a table; each table supports at most one clustered index.

The following example from the official documentation illustrates these rules:

CREATE TABLE customer (
  customer_id bigint NOT NULL COMMENT '顾客ID',
  customer_name varchar NOT NULL COMMENT '顾客姓名',
  phone_num bigint NOT NULL COMMENT '电话',
  city_name varchar NOT NULL COMMENT '所属城市',
  sex int NOT NULL COMMENT '性别',
  id_number varchar NOT NULL COMMENT '身份证号码',
  home_address varchar NOT NULL COMMENT '家庭住址',
  office_address varchar NOT NULL COMMENT '办公地址',
  age int NOT NULL COMMENT '年龄',
  login_time timestamp NOT NULL COMMENT '登录时间',
  PRIMARY KEY (login_time, customer_id, phone_num)
)
DISTRIBUTED BY HASH(customer_id)
PARTITION BY VALUE(DATE_FORMAT(login_time, '%Y%m%d')) LIFECYCLE 30
COMMENT '客户信息表';

The customer table uses customer_id as the distribution key and login_time as the partition key. Since the primary key must contain both, (login_time, customer_id, phone_num) is chosen, with the distribution key and partition key placed at the front of the primary key. Partitions are per day, and the LIFECYCLE 30 setting means partitions older than 30 days are dropped automatically.

2 Adaptive indexes

AnalyticDB for MySQL's Xuanwu storage engine uses adaptive column-level automatic indexing: indexes are created automatically for string, numeric, text, JSON, and vector column types. After creating a table, run show indexes from <table name> and you will see an index on every column, all created automatically by AnalyticDB for MySQL. AnalyticDB for MySQL can combine these column-level indexes for retrieval across arbitrary dimensions, with multi-way progressive streaming merge, greatly improving data-filtering performance.
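The key property of hash distribution described above is that every row with the same distribution-key value lands on the same shard, so joins and aggregates keyed on it stay node-local. The following Python sketch illustrates the idea; the shard count and MD5-based hash are illustrative assumptions, not AnalyticDB for MySQL's actual internal hash function.

```python
import hashlib

# Illustrative only: in AnalyticDB for MySQL the shard count is fixed by
# the cluster specification, not chosen by the user.
NUM_SHARDS = 8

def shard_for(distribution_key: str, num_shards: int = NUM_SHARDS) -> int:
    """Map a distribution-key value to a shard id (hypothetical hash)."""
    digest = hashlib.md5(distribution_key.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_shards

# All rows for the same customer_id hash to the same shard, so a query
# grouped by customer_id never needs to move rows between nodes.
rows = [("c1001", "2022-08-01"), ("c1001", "2022-08-02"), ("c2002", "2022-08-01")]
for customer_id, login_day in rows:
    print(customer_id, login_day, "-> shard", shard_for(customer_id))
```

This also shows why a timestamp makes a poor distribution key: consecutive rows from the same load window would hash by time and scatter each entity's data, while hot time ranges could still pile onto a few shards.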

Parsing the MySQL Slow Query Log

1 Enabling and configuring the MySQL slow query log

The slow query log is a common performance diagnostic tool in MySQL. It records SQL statements whose execution time exceeds a limit (10 seconds by default). Collecting the poorly performing statements over a period when the database shows performance problems, then analyzing and diagnosing them together, is one of the standard ways a DBA troubleshoots MySQL.

The slow query log is off by default:

mysql> show variables like '%slow_query%';
+---------------------------+---------------------------------------------+
| Variable_name             | Value                                       |
+---------------------------+---------------------------------------------+
| slow_query_log            | OFF                                         |
| slow_query_log_file       | /mysqldata/iZ2ze0t8khaprrpfvmevjiZ-slow.log |
+---------------------------+---------------------------------------------+
5 rows in set (0.01 sec)

To enable it, edit the MySQL configuration file, add slow_query_log=on in the [mysqld] section, and restart the database. To avoid a restart, log in and set the slow_query_log global variable directly:

mysql> set global slow_query_log=on;
Query OK, 0 rows affected (0.14 sec)

This takes effect for connections created after the change, not for sessions already connected, so log out of the current session and back in:

mysql> show global variables like '%slow_query%';
+---------------------------+---------------------------------------------+
| Variable_name             | Value                                       |
| slow_query_log            | ON                                          |
| slow_query_log_file       | /mysqldata/iZ2ze0t8khaprrpfvmevjiZ-slow.log |
+---------------------------+---------------------------------------------+

The slow query log is now on, and captured statements are written to /mysqldata/iZ2ze0t8khaprrpfvmevjiZ-slow.log. That is the default location: /mysqldata is MySQL's data directory and iZ2ze0t8khaprrpfvmevjiZ is the database server's host name.

Two options are commonly adjusted. long_query_time sets the threshold above which a query is captured, and log_queries_not_using_indexes (off by default), when set to on, captures every SQL statement that does not use an index. The latter is very useful to a DBA: as is well known, MySQL supports only nested-loop joins, for which indexes are crucial, so being able to obtain in one pass every SQL statement in the database that uses no index is a great convenience.

mysql> set global log_queries_not_using_indexes=on;
Query OK, 0 rows affected (0.00 sec)
mysql> set long_query_time=0.1;
Query OK, 0 rows affected (0.00 sec)

2 Creating test data with sysbench (via Docker)

Running sysbench in Docker has two advantages. Installation and configuration are trivial: there is no need to install sysbench and its dependency packages on the operating system, just pull the image and run a container. And it barely affects the existing OS environment; removal is as simple as deleting the container.

2.1 Installing docker

Docker installation is straightforward; only the steps are listed here:

[root]# yum install -y yum-utils
[root]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Adding repo from:
http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root]# yum makecache fast && yum -y install docker-ce
[root@ ~]# systemctl start docker
[root@ ~]# systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled; vendor preset: disabled)
   Active: active (running) since Mon 2022-08-15 09:51:47 CST; 8s ago

2.2 Pulling the sysbench image

Search for a sysbench image:

[root@ ~]# docker search sysbench
NAME                    DESCRIPTION    STARS  OFFICIAL  AUTOMATED
severalnines/sysbench   Sysbench 1.0   18               [OK]

Pull the first image found:

[root@ ~]# docker pull severalnines/sysbench
Using default tag: latest
latest: Pulling from severalnines/sysbench
e79bb959ec00: Pull complete
6f6da6c9c901: Pull complete
206bc115af8c: Pull complete
ba6ac542f035: Pull complete
1f1dcb85e92e: Pull complete
Digest: sha256:64cd003bfa21eaab22f985e7b95f90d21a970229f5f628718657dd1bae669abd
Status: Downloaded newer image for severalnines/sysbench:latest
docker.io/severalnines/sysbench:latest

The image was pulled successfully; check the local images:

[root@ ~]# docker images
REPOSITORY              TAG      IMAGE ID       CREATED       SIZE
severalnines/sysbench   latest   0e71335a2211   3 years ago   429MB

2.3 Creating the sysbench database and account

With the sysbench image now local, create a sysbench database in MySQL along with an account and password for connecting to it:

mysql> create database sysbenchdb;
Query OK, 1 row affected (0.00 sec)
mysql> grant all privileges on sysbenchdb.* to u_sysbench@'%' identified by '123456';
Query OK, 0 rows affected, 1 warning (0.00 sec)
mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)

Do not forget to flush privileges after creating the account, or the login to the database will fail.

2.4 Creating the test data

Create the test data with the following command:

[root@ ~]# docker run --rm=true --name=sb-prepare 0e71335a2211 sysbench --test=/usr/share/sysbench/oltp_read_only.lua --mysql-host=172.20.11.244 --mysql-port=3306 --mysql-db=sysbenchdb --mysql-user="u_sysbench" --mysql-password='123456' --tables=4 --table_size=10000000 --threads=2 --time=300 --report-interval=3 --db-driver=mysql --db-ps-mode=disable --skip-trx=on --mysql-ignore-errors=6002,6004,4012,2013,4016,1062 prepare
The command is long and looks a bit complex, so a brief explanation. Apart from the first line, it is exactly the command you would run to create test data with sysbench installed directly on the operating system. The first line runs sysbench via docker: --name names the container, and --rm=true deletes the container once data creation finishes; this is a one-shot task, and after it completes the container has served its purpose, so there is no point keeping it. 0e71335a2211 is the id of the sysbench image we pulled (a name would work too), and everything after it is the command the container runs on startup.

--test is the sysbench script to run, and the next few options are the database connection details. Because sysbench runs inside docker, the local address (127.0.0.1 or localhost) cannot be used as the host name. After that come the number of tables, the thread count, the table size, the duration, the report interval, the database driver, the transaction option, and the errors to ignore; the final prepare is the sysbench command to run. Because we are testing slow queries here, the tables are made fairly large. The command's output looks like this:

WARNING: the --test option is deprecated. You can pass a script name or path on the command line without any options.
sysbench 1.0.17 (using bundled LuaJIT 2.1.0-beta2)

Initializing worker threads...

Creating table 'sbtest2'...
Creating table 'sbtest1'...
Inserting 10000000 records into 'sbtest1'
Inserting 10000000 records into 'sbtest2'
Creating a secondary index on 'sbtest1'...
Creating a secondary index on 'sbtest2'...
Creating table 'sbtest3'...
Inserting 10000000 records into 'sbtest3'
Creating table 'sbtest4'...
Inserting 10000000 records into 'sbtest4'
Creating a secondary index on 'sbtest3'...
Creating a secondary index on 'sbtest4'...

Not only were the tables created, but the indexes as well.

2.5 Running the sysbench test

[root@ ~]# docker run --name=sb-run 0e71335a2211 sysbench --test=/usr/share/sysbench/oltp_read_only.lua --mysql-host=172.20.11.244 --mysql-port=3306 --mysql-db=sysbenchdb --mysql-user="u_sysbench" --mysql-password=123456 --tables=4 --table_size=10000000 --threads=4 --time=300 --report-interval=10 --db-driver=mysql --db-ps-mode=disable --skip-trx=on --mysql-ignore-errors=6002,6004,4012,2013,4016,1062 run

This command is mostly identical to the data-creation one. The first line no longer carries --rm=true, because this test may be run again later, so the container is kept after it finishes; and the sysbench command is run rather than prepare. The command's output:

SQL statistics:
    queries performed:
        read:      1477406
        write:     0
        other:     0
        total:     1477406
    transactions:  105529 (351.72 per sec.)
    queries:       1477406 (4924.08 per sec.)
    ignored errors: 0 (0.00 per sec.)
    reconnects:    0 (0.00 per sec.)
General statistics:
    total time:              300.0350s
    total number of events:  105529

Latency (ms):
    min:             2.22
    avg:            11.37
    max:           349.88
    95th percentile: 15.00
    sum:       1199812.63

Threads fairness:
    events (avg/stddev):          26382.2500/23.23
    execution time (avg/stddev):  299.9532/0.01

3 Analyzing the MySQL slow query log

3.1 Using text tools

The slow query log is a text file and can be inspected directly with Linux text commands:

[root@ mysqldata]# cat iZ2ze0t8khaprrpfvmevjiZ-slow.log
/usr/local/mysql/bin/mysqld, Version: 5.7.34 (MySQL Community Server (GPL)). started with:
Tcp port: 3306  Unix socket: /tmp/mysql.sock
Time                 Id Command    Argument
/usr/local/mysql/bin/mysqld, Version: 5.7.34 (MySQL Community Server (GPL)). started with:
Tcp port: 3306  Unix socket: /tmp/mysql.sock
Time                 Id Command    Argument
# Time: 2022-08-15T02:46:17.506869Z
# User@Host: root[root] @ localhost []  Id:    34
# Query_time: 22.707483  Lock_time: 0.000098 Rows_sent: 400000  Rows_examined: 800000
use sysbenchdb;
SET timestamp=1660531577;
select * from sbtest1 union select * from sbtest2 union select * from sbtest3 union select * from sbtest4;
# Time: 2022-08-15T02:52:41.908776Z
# User@Host: root[root] @ localhost []  Id:    37
# Query_time: 24.590315  Lock_time: 0.000109 Rows_sent: 1  Rows_examined: 1200000
SET timestamp=1660531961;
select count(*) from (select * from sbtest1 union select * from sbtest2 union select * from sbtest3 union select * from sbtest4) a;

The slow query log shows, for each captured SQL statement, the time, the connection information, the query time, the lock time, the rows sent, the rows examined, and so on. It is readable, but not very convenient.

3.2 Using mysqldumpslow

mysqldumpslow is MySQL's official slow-log viewer, installed along with the database. It applies some formatting to the slow query log, which makes it easier to read:

[root@ mysqldata]# mysqldumpslow -v iZ2ze0t8khaprrpfvmevjiZ-slow.log

Reading mysql slow query log from iZ2ze0t8khaprrpfvmevjiZ-slow.log
Count: 1  Time=24.59s (24s)  Lock=0.00s (0s)  Rows=1.0 (1), root[root]@localhost
  select count(*) from (select * from sbtest1 union select * from sbtest2 union select * from sbtest3 union select * from sbtest4) a

Count: 1  Time=22.71s (22s)  Lock=0.00s (0s)  Rows=400000.0 (400000), root[root]@localhost
  select * from sbtest1 union select * from sbtest2 union select * from sbtest3 union
  select * from sbtest4

Count: 1  Time=0.41s (0s)  Lock=0.00s (0s)  Rows=0.0 (0), root[root]@localhost
  select pad from sbtest1 where c = "S"

The information displayed is more readable: the execution count, query time, and row counts are aggregated per statement.

4 Analyzing the slow query log with Percona's pt-query-digest

4.1 Installing pt-query-digest

To use pt-query-digest, percona-toolkit must be installed, and it has a few dependency packages. The installation commands are listed here without further comment:

[root@ mysqldata]# yum -y install perl-IO-Socket-SSL
[root@ mysqldata]# yum -y install perl
[root@ mysqldata]# yum -y install perl-DBD-MySQL
[root@ mysqldata]# yum install perl-Digest-MD5 -y

The simplest way to install percona-toolkit is from a yum repository. First download Percona's official yum repo:

[root@ mysqldata]# wget percona.com/get/percona-toolkit.rpm
percona-toolkit.rpm            [ <=> ]  51.08K   203KB/s    in 0.3s
2022-08-15 11:20:59 (203 KB/s) - ‘percona-toolkit.rpm’ saved [52310]

The downloaded repo definition is an rpm package; install it:

[root@~]# yum install https://repo.percona.com/yum/percona-release-latest.noarch.rpm

With the yum repo installed, the percona-toolkit tools themselves can be installed:

[root@ ~]# yum install percona-toolkit
Percona Original release/x86_64 YUM repository    1.9 MB/s | 8.7 MB  00:04
Percona Original release/noarch YUM repository    2.8 kB/s | 3.6 kB  00:01
Percona Release release/noarch YUM repository     1.4 kB/s | 1.8 kB  00:01
Dependencies resolved.
========================================================================================================================
 Package                  Architecture     Version              Repository                 Size
========================================================================================================================
Installing:
 percona-toolkit          x86_64           3.4.0-3.el8          percona-release-x86_64     20 M
Installing dependencies:
 perl-TermReadKey         x86_64           2.37-7.el8           AppStream                  40 k
 perl-Time-HiRes          x86_64           4:1.9758-2.el8       AppStream                  61 k

Installed:
  percona-toolkit-3.4.0-3.el8.x86_64   perl-TermReadKey-2.37-7.el8.x86_64   perl-Time-HiRes-4:1.9758-2.el8.x86_64

Complete!

4.2 Analyzing the slow query log

pt-query-digest is very simple to use; its only required argument is the name of the slow-log file to analyze.

Its output falls into three parts. The first part analyzes the slow query log as a whole: the total execution time of all slow statements, their minimum, maximum, and average execution times, the median, the standard deviation, and so on. From this, you get a good overall picture of how the SQL statements behaved during the period.

[root@ mysqldata]# pt-query-digest iZ2ze0t8khaprrpfvmevjiZ-slow.log

# 180ms user time, 30ms system time, 34.11M rss, 99.52M vsz
# Current date: Mon Aug 15 14:54:31 2022
# Hostname: iZ2ze0t8khaprrpfvmevjiZ
# Files: iZ2ze0t8khaprrpfvmevjiZ-slow.log
# Overall: 5 total, 5 unique, 0.02 QPS, 0.59x concurrency ________________
# Time range: 2022-08-15T06:50:43 to 2022-08-15T06:54:04
# Attribute          total     min     max     avg     95%  stddev  median
# ============     ======= ======= ======= ======= ======= ======= =======
# Exec time           119s   104ms     91s     24s     88s     34s   189ms
# Lock time           50ms    24us    42ms    10ms    42ms    16ms    44us
# Rows sent            201       0     100   40.20   97.36   47.53    0.99
# Rows examine      20.40M     100  10.86M   4.08M  10.76M   4.94M  299.03
# Query size           316      43      78   63.20   76.28   12.04   62.76
The second part ranks the top slow statements; these are the ones to focus on.

# Profile
# Rank Query ID                            Response time Calls R/Call  V/M
# ==== =================================== ============= ===== ======= ===
#    1 0xEE987693396E1A09E0A9089F9AFA861A  91.0625 76.5%     1 91.0625 0.00 SELECT UNION sbtest?
#    2 0xB3461D1C50B98133F6292480060B3436  27.6287 23.2%     1 27.6287 0.00 SELECT sbtest?
# MISC 0xMISC                               0.4052  0.3%     3  0.1351 0.0 <3 ITEMS>

The third part gives the detailed information for each slow statement:

# Query 1: 0 QPS, 0x concurrency, ID 0xEE987693396E1A09E0A9089F9AFA861A at byte 1056
# This item is included in the report because it matches --limit.
# Scores: V/M = 0.00
# Time range: all events occurred at 2022-08-15T06:54:04
# Attribute    pct   total     min     max     avg     95%  stddev  median
# ============ === ======= ======= ======= ======= ======= ======= =======
# Count         20       1
# Exec time     76     91s     91s     91s     91s     91s       0     91s
# Lock time     16     8ms     8ms     8ms     8ms     8ms       0     8ms
# Rows sent      0       0       0       0       0       0       0       0
# Rows examine  53  10.86M  10.86M  10.86M  10.86M  10.86M       0  10.86M
# Query size    23      74      74      74      74      74       0      74
# String:
# Hosts        localhost
# Users        root
# Query_time distribution
#   1us
#  10us
# 100us
#   1ms
#  10ms
# 100ms
#    1s
#  10s+  ################################################################
# Tables
#    SHOW TABLE STATUS LIKE 'sbtest1'\G
#    SHOW CREATE TABLE `sbtest1`\G
#    SHOW TABLE STATUS LIKE 'sbtest2'\G
#    SHOW CREATE TABLE `sbtest2`\G
# EXPLAIN /*!50100 PARTITIONS*/
select count(*) from (select * from sbtest1 union select * from sbtest2) a\G

# Query 2: 0 QPS, 0x concurrency, ID 0xB3461D1C50B98133F6292480060B3436 at byte 816
# This item is included in the report because it matches --limit.
# Scores: V/M = 0.00
# Time range: all events occurred at 2022-08-15T06:50:56
# Attribute    pct   total     min     max     avg     95%  stddev  median
# ============ === ======= ======= ======= ======= ======= ======= =======
# Count         20       1
# Exec time     23     28s     28s     28s     28s     28s       0     28s
# Lock time     82    42ms    42ms    42ms    42ms    42ms       0    42ms
# Rows sent      0       0       0       0       0       0       0       0
# Rows examine  46   9.54M   9.54M   9.54M   9.54M   9.54M       0   9.54M
# Query size    13      43      43      43      43      43       0      43
# String:
# Hosts        localhost
# Users        root
# Query_time distribution
#   1us
#  10us
# 100us
#   1ms
#  10ms
# 100ms
#    1s
#  10s+  ################################################################
# Tables
#    SHOW TABLE STATUS LIKE 'sbtest1'\G
#    SHOW CREATE TABLE `sbtest1`\G
# EXPLAIN /*!50100 PARTITIONS*/
select pad from sbtest1 where c = 'changan'\G

As the output shows, every statement is analyzed individually: the maximum, minimum, average, and median of each metric, along with the query-time distribution, are all visible at a glance, providing rich information for performance diagnosis.
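The slow-log entry format shown in section 3.1 (`# Query_time: ... Lock_time: ... Rows_sent: ... Rows_examined: ...` headers followed by the SQL) is regular enough to parse with a short script when neither mysqldumpslow nor pt-query-digest is at hand. The sketch below handles only the common single-statement entries shown above; it is a minimal illustration, not a full slow-log parser.

```python
import re

# Match the per-query statistics line of the slow log format shown above.
ENTRY_RE = re.compile(
    r"# Query_time: (?P<query_time>[\d.]+)\s+Lock_time: (?P<lock_time>[\d.]+)"
    r"\s+Rows_sent: (?P<rows_sent>\d+)\s+Rows_examined: (?P<rows_examined>\d+)"
)

def parse_slow_log(text: str):
    """Return one dict per captured statement: timing fields plus the SQL text."""
    entries = []
    current = None
    for line in text.splitlines():
        m = ENTRY_RE.match(line)
        if m:
            current = {k: float(v) for k, v in m.groupdict().items()}
            current["sql"] = ""
        elif current is not None and line and not line.startswith(("#", "SET timestamp", "use ")):
            current["sql"] += line.strip() + " "
            if line.rstrip().endswith(";"):
                entries.append(current)
                current = None
    return entries

sample = """# Time: 2022-08-15T02:46:17.506869Z
# User@Host: root[root] @ localhost []  Id: 34
# Query_time: 22.707483  Lock_time: 0.000098 Rows_sent: 400000  Rows_examined: 800000
use sysbenchdb;
SET timestamp=1660531577;
select * from sbtest1 union select * from sbtest2;
"""
for e in parse_slow_log(sample):
    print(round(e["query_time"], 2), e["sql"].strip())
```

With the parsed entries as plain dicts, sorting by query_time or summing rows_examined per statement is one more line of Python.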

MySQL Execution Plans: show warnings

MySQL's execution-plan command explain has an extended option. In early MySQL versions this option produced extra information that did not appear in explain's own output; it had to be viewed by running show warnings after explain. In versions 5.7 and later, extended has become explain's default behavior, which is also why every explain run reports one warning. The extended option still works but no longer has any practical effect; it is kept purely for compatibility with earlier versions and may be removed in a future release.

1 The SQL statement

The statement to analyze is deliberately somewhat complex, so as to show as many facets of MySQL execution plans as possible. It queries, for the store in the sakila database whose manager staff id is 2, the customer id, address id, first_name, and last_name of customers across all countries. It joins sakila's customer, address, and city tables. The city lookup uses a subquery whose complexity is intentionally increased with a union: countries that are not "Yemen" unioned with countries that are not "Vietnam", which together is all countries. The customer filter simply uses a subquery.

mysql> explain select a.customer_id, a.address_id, a.first_name, a.last_name from customer a inner join address b on a.address_id=b.address_id inner join city c on b.city_id =c.city_id where c.country_id in (select country_id from country where country<>"Yemen" union select country_id from country where country<>"Vietnam") and a.store_id not in (select store_id from store where manager_staff_id=2);

2 The statement's execution plan

+----+--------------------+------------+--------+--------------------+---------------------+-----------------+
| id | select_type        | table      | type   | key                | ref                 | Extra           |
+----+--------------------+------------+--------+--------------------+---------------------+-----------------+
|  1 | PRIMARY            | a          | ALL    | NULL               | NULL                | Using where     |
|  1 | PRIMARY            | b          | eq_ref | PRIMARY            | sakila.a.address_id | NULL            |
|  1 | PRIMARY            | c          | eq_ref | PRIMARY            | sakila.b.city_id    | Using where     |
|  4 | SUBQUERY           | store      | const  | idx_unique_manager | const               | Using index     |
|  2 | DEPENDENT SUBQUERY | country    | eq_ref | PRIMARY            | func                | Using where     |
|  3 | DEPENDENT UNION    | country    | eq_ref | PRIMARY            | func                | Using where     |
| NULL | UNION RESULT     | <union2,3> | ALL    | NULL               | NULL                | Using temporary |
+----+--------------------+------------+--------+--------------------+---------------------+-----------------+

A query whose select_type is PRIMARY is the outermost query. When subqueries are present, the outermost query's type is PRIMARY; with outer or inner joins, the leftmost query's type remains SIMPLE, as this small example shows:

mysql> explain select a.city,b.country from city a left join country b on a.country_id=b.country_id;
+----+-------------+-------+--------+---------+---------------------+------+-------+
| id | select_type | table | type   | key     | ref                 | rows | Extra |
+----+-------------+-------+--------+---------+---------------------+------+-------+
|  1 | SIMPLE      | a     | ALL    | NULL    | NULL                |  600 | NULL  |
|  1 | SIMPLE      | b     | eq_ref | PRIMARY | sakila.a.country_id |    1 | NULL  |
+----+-------------+-------+--------+---------+---------------------+------+-------+
2 rows in set, 1 warning (0.00 sec)

DEPENDENT SUBQUERY and DEPENDENT UNION operations depend on the results of the outer query and should be avoided in real SQL wherever possible.

3 show warnings

show warnings is only meaningful after select statements; it does nothing useful after update, delete, or insert. The command shows how the optimizer qualified table and column names in the select statement, what the SQL looks like after rewriting and after optimizer rules were applied, and other information about the optimizer process.

The show warnings output contains many special markers, which makes it rather hard to read and understand, and its layout is unfriendly. The output shown below has been reformatted to be easier to follow:

mysql> show warnings\G;
*************************** 1. row ***************************
  Level: Note
   Code: 1003
Message: /* select#1 */
select `sakila`.`a`.`customer_id` AS `customer_id`,
       `sakila`.`a`.`address_id` AS `address_id`,
       `sakila`.`a`.`first_name` AS `first_name`,
       `sakila`.`a`.`last_name` AS `last_name`
from `sakila`.`customer` `a`
join `sakila`.`address` `b`
join `sakila`.`city` `c`
where (
  (`sakila`.`b`.`address_id` = `sakila`.`a`.`address_id`)
  and (`sakila`.`c`.`city_id` = `sakila`.`b`.`city_id`)
  <in_optimizer> `sakila`.`c`.`country_id`,<exists>
  (/* select#2 */
   select 1 from `sakila`.`country`
   where (`sakila`.`country`.`country` <> 'Yemen')
     (<cache>(`sakila`.`c`.`country_id`) = `sakila`.`country`.`country_id`)
   union
   /* select#3 */
   select 1 from `sakila`.`country`
   where (`sakila`.`country`.`country` <> 'Vietnam')
     (<cache>(`sakila`.`c`.`country_id`) = `sakila`.`country`.`country_id`)
  and (
    not(<in_optimizer> (`sakila`.`a`.`store_id`,`sakila`.`a`.`store_id` in (
      <materialize> (/* select#4 */ select '2' from `sakila`.`store` where 1 ),
      <primary_index_lookup> (`sakila`.`a`.`store_id` in <temporary table>
        on <auto_key> where (`sakila`.`a`.`store_id` = `materialized-subquery`.`store_id`)
1 row in set
(0.00 sec)

ERROR:
No query specified

Several common special markers in show warnings output mean the following:

<auto_key> is an automatically created key (index) on a temporary table.
<cache> means the expression was executed once and its value stored in memory for later use. If the expression produces multiple values, a temporary table may be created, in which case you will see <temporary table> instead.
<exists> means a subquery predicate was converted into an EXISTS predicate, the subquery having been transformed so that it can be used with EXISTS.
<in_optimizer> is an internal optimizer object with no meaning to the user.
<materialize> indicates subquery materialization was used.
<primary_index_lookup> means rows were looked up via the primary key.

With the meaning of these markers understood, the show warnings result can be interpreted.

/* select#1 */ corresponds to the operations with id 1 in the execution plan. There are three of them, executed from top to bottom; this is a three-table join in the order a, b, c. In the where clause, the join keys of these three tables appear first, and the two remaining parts correspond to the other two where conditions of the SQL statement; <in_optimizer> marks an optimizer object and can be ignored.

The <exists> in the first where condition shows that the optimizer converted the subquery into an EXISTS expression. /* select#2 */ and /* select#3 */ correspond to the plan operations with ids 2 and 3: a dependent subquery and a dependent union. Both depend on the country_id passed in from the outer city table; after the first access to country_id, its value is cached for later use.

The subquery in the second where condition was optimized with materialization. /* select#4 */ corresponds to the plan operation with id 4; <primary_index_lookup> shows this lookup probes the materialized temporary table's rows via a primary key, namely the <auto_key> that MySQL automatically created for the materialized temporary table.

Removing all the special markers from the show warnings result leaves the SQL statement as rewritten and transformed by the optimizer. In summary, MySQL's optimizer converted the first subquery into an EXISTS expression and applied materialization to the second.
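Since "removing all the special markers" recovers the rewritten SQL, that cleanup step itself is easy to script. The sketch below strips the markers listed above from a show warnings message; the marker list is taken from this article and is not exhaustive, and the `/* select#N */` comments are kept because they map back to EXPLAIN ids.

```python
import re

# Optimizer annotations to strip; list taken from the markers discussed above.
MARKERS = ["<in_optimizer>", "<exists>", "<cache>", "<materialize>",
           "<primary_index_lookup>", "<auto_key>", "<temporary table>"]

def clean_warning_sql(message: str) -> str:
    """Remove optimizer markers and collapse whitespace in a show warnings message."""
    for marker in MARKERS:
        message = message.replace(marker, "")
    return re.sub(r"\s+", " ", message).strip()

raw = ("select 1 from `sakila`.`country` where "
       "(<cache>(`sakila`.`c`.`country_id`) = `sakila`.`country`.`country_id`)")
print(clean_warning_sql(raw))
```

What remains after cleaning is close to ordinary SQL and can be diffed against the statement you wrote to see exactly which rewrites the optimizer applied.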

Interpreting MySQL execution plans, with a large set of example SQL statements

1 Setting up the demo environment

1.1 Installing MySQL

There are several ways to install MySQL on Linux. The simplest is the rpm package, but it offers few options: you cannot pick your own data or installation directory. The most flexible is compiling from source, where you choose which modules to build, but the procedure is complex. A good compromise between flexibility and installation effort is the binary tarball: you can choose the installation and data directories and the process is not too involved, so that is the method recommended and used here.

Download the binary package from the MySQL website (the latest MySQL 5.7 release here) and extract it to /usr/local; tar's -C option sets the extraction directory.

[root@iZ2ze0t8khaprrpfvmevjiZ ~]# tar -xvf mysql-5.7.34-linux-glibc2.12-x86_64.tar.gz -C /usr/local

Switch to /usr/local and create a symbolic link mysql pointing at the extracted software. The link is for convenience: with several MySQL versions installed on one server, switching versions is just a matter of deleting the mysql link and re-pointing it at a different version.

[root@iZ2ze0t8khaprrpfvmevjiZ ~]# cd /usr/local
[root@iZ2ze0t8khaprrpfvmevjiZ local]# ln -s mysql-5.7.34-linux-glibc2.12-x86_64 mysql
[root@iZ2ze0t8khaprrpfvmevjiZ local]# ls -l
total 0
drwxr-xr-x  7 root root 160 Jul 28 16:33 aegis
drwxr-xr-x. 2 root root 210 Jul 20 17:27 bin
drwxr-xr-x. 2 root root   6 Jun 22  2021 etc
lrwxrwxrwx  1 root root  35 Aug  9 13:55 mysql -> mysql-5.7.34-linux-glibc2.12-x86_64
drwxr-xr-x  9 root root 129 Aug  9 13:53 mysql-5.7.34-linux-glibc2.12-x86_64

Add the directory holding the MySQL executables to PATH so the mysql commands can be run without changing into their directory.

[root@iZ2ze0t8khaprrpfvmevjiZ ~]# echo 'export PATH=$PATH:/usr/local/mysql/bin' >> ~/.bash_profile

Source the login script so the new PATH takes effect:

[root@iZ2ze0t8khaprrpfvmevjiZ ~]# source .bash_profile

Check the environment; the MySQL bin directory is now on PATH:

[root@iZ2ze0t8khaprrpfvmevjiZ ~]# env|grep -i path
DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/0/bus
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin:/root/bin:/usr/local/mysql/bin

Create the mysql group:

[root@iZ2ze0t8khaprrpfvmevjiZ ~]# groupadd mysql

Create the mysql user, with its shell set to /bin/false so the account cannot log in:

[root@iZ2ze0t8khaprrpfvmevjiZ ~]# useradd -r -g mysql -s /bin/false mysql

Create the MySQL data directory and change its owner to mysql:mysql:

[root@iZ2ze0t8khaprrpfvmevjiZ mysql]# mkdir /mysqldata
[root@iZ2ze0t8khaprrpfvmevjiZ mysql]# chown mysql:mysql /mysqldata

1.2 Initializing and starting MySQL

With the environment prepared, the database can be initialized. The operating-system user must be specified, and since a non-default location is used here, the data directory as well.

[root@iZ2ze0t8khaprrpfvmevjiZ bin]# mysqld --initialize --user=mysql --datadir=/mysqldata
2022-08-09T06:07:51.802610Z 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
2022-08-09T06:07:52.836022Z 0 [Warning] InnoDB: New log files created, LSN=45790
2022-08-09T06:07:52.936636Z 0 [Warning] InnoDB: Creating foreign key constraint system tables.
2022-08-09T06:07:53.014709Z 0 [Warning] No existing UUID has been found, so we assume that this is the first time that this server has been started. Generating a new UUID: 9aaac001-17a9-11ed-b423-00163e2eafc6.
2022-08-09T06:07:53.017913Z 0 [Warning] Gtid table is not ready to be used. Table 'mysql.gtid_executed' cannot be opened.
2022-08-09T06:07:54.308559Z 0 [Warning] CA certificate ca.pem is self signed.
2022-08-09T06:07:54.512118Z 1 [Note] A temporary password is generated for root@localhost: R_!Y2zfqq5kC

Initialization generates a temporary root password. Note it down, then start the database with mysqld_safe, again specifying the OS user and data directory, and put it in the background:

[root@iZ2ze0t8khaprrpfvmevjiZ bin]# mysqld_safe --user=mysql --datadir=/mysqldata &

Log in with the temporary root password shown above:

[root@iZ2ze0t8khaprrpfvmevjiZ bin]# mysql -uroot -p
Enter password:

Running a command shows that the password must be reset first:

mysql> show databases;
ERROR 1820 (HY000): You must reset your password using ALTER USER statement before executing this statement.

Reset the root password with the command below; after logging back in with the new password, queries work:

mysql> ALTER USER USER() IDENTIFIED BY "******";
Query OK, 0 rows affected (0.00 sec)
mysql> select version();
+-----------+
| version() |
+-----------+
| 5.7.34    |
+-----------+
1 row in set (0.00 sec)

1.3 Importing the sakila sample database

Download the sakila creation scripts from the MySQL website. After unpacking there are two files: sakila-schema.sql creates the sakila database with its tables and indexes, and sakila-data.sql inserts the data. The test environment here is an ECS instance; file upload from the ECS console turned out to be limited to 16 KB, so psftp was used instead. psftp is PuTTY's transfer tool and works as long as sshd is running on the server, which is very convenient.

The files were uploaded to /root/; change into that directory, log in to MySQL, and run the two scripts:

mysql> source sakila-schema.sql
mysql> source sakila-data.sql

1.4 The main example table

This article mostly uses sakila's fairly simple city table for demonstrations. It has few rows and columns, yet it carries a primary key, a foreign key, and indexes: small but complete. Here is its definition:

mysql> show create table city\G;
*************************** 1. 
row ***************************
       Table: city
Create Table: CREATE TABLE `city` (
  `city_id` smallint(5) unsigned NOT NULL AUTO_INCREMENT,
  `city` varchar(50) NOT NULL,
  `country_id` smallint(5) unsigned NOT NULL,
  `last_update` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY (`city_id`),
  KEY `idx_fk_country_id` (`country_id`),
  CONSTRAINT `fk_city_country` FOREIGN KEY (`country_id`) REFERENCES `country` (`country_id`) ON UPDATE CASCADE
) ENGINE=InnoDB AUTO_INCREMENT=601 DEFAULT CHARSET=utf8mb4
1 row in set (0.00 sec)

Two indexes are defined on the table:

mysql> show index from city;
+-------+------------+-------------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| Table | Non_unique | Key_name          | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | Index_comment |
+-------+------------+-------------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| city  |          0 | PRIMARY           |            1 | city_id     | A         |         600 |     NULL | NULL   |      | BTREE      |         |               |
| city  |          1 | idx_fk_country_id |            1 | country_id  | A         |         600 |     NULL | NULL   |      | BTREE      |         |               |
+-------+------------+-------------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
2 rows in set (0.00 sec)

2 MySQL execution plans

Let's look at a simple example first. Plans are viewed with the explain command; explain does not execute the SQL statement, it only prints its execution plan, so it has no effect on the database and is safe to use.

mysql> explain select city from city;
+----+-------------+-------+------------+------+---------------+------+---------+------+------+----------+-------+
| id | select_type | table | partitions | type 
| possible_keys | key  | key_len | ref  | rows | filtered | Extra |
+----+-------------+-------+------------+------+---------------+------+---------+------+------+----------+-------+
|  1 | SIMPLE      | city  | NULL       | ALL  | NULL          | NULL | NULL    | NULL |  600 |   100.00 | NULL  |
+----+-------------+-------+------------+------+---------------+------+---------+------+------+----------+-------+
1 row in set, 1 warning (0.00 sec)

This is about the simplest statement MySQL can run, and yet the plan output has many columns, each of which helps to some degree in judging the statement's performance. Some are especially important, such as table, type, possible_keys, key, and Extra; from them you can often tell directly where a statement's problem lies and how to optimize it. For the statement above, the type value ALL shows a full table scan was executed, the object scanned is the table named in table (city), and no usable index was available during execution.

3 Key fields of the MySQL execution plan

3.1 id

id, the first column of the plan, is the operation's sequence number. With no subqueries or unions there is only one query and every row has id 1; otherwise each inner SQL block is numbered in turn. Each row of the plan output describes an operation on one table, executed from top to bottom. The ids are not always in ascending execution order, i.e. an operation with a smaller id does not always run before one with a larger id. Here is an example:

mysql> explain select  a.customer_id, a.address_id, a.first_name, a.last_name
       from customer a inner join address b on a.address_id=b.address_id
       inner join city c on b.city_id=c.city_id
       where c.country_id in (select country_id from country where country<>"Yemen"
                              union
                              select country_id from country where country<>"Vietnam")
         and a.store_id not in (select store_id from store where manager_staff_id=2);
-+---------------------+------+----------+-----------------+
| id | select_type        | table      | partitions | type   | possible_keys              | key                | key_len | ref                 | rows | filtered | Extra           |
+----+--------------------+------------+------------+--------+----------------------------+--------------------+---------+---------------------+------+----------+-----------------+
|  1 | PRIMARY            | a          | NULL       | ALL    | idx_fk_address_id          | NULL               | NULL    | NULL                |  599 |   100.00 | Using where     |
|  1 | 
PRIMARY            | b          | NULL       | eq_ref | PRIMARY,idx_fk_city_id     | PRIMARY            | 2       | sakila.a.address_id |    1 |   100.00 | NULL            |
|  1 | PRIMARY            | c          | NULL       | eq_ref | PRIMARY                    | PRIMARY            | 2       | sakila.b.city_id    |    1 |   100.00 | Using where     |
|  4 | SUBQUERY           | store      | NULL       | const  | PRIMARY,idx_unique_manager | idx_unique_manager | 1       | const               |    1 |   100.00 | Using index     |
|  2 | DEPENDENT SUBQUERY | country    | NULL       | eq_ref | PRIMARY                    | PRIMARY            | 2       | func                |    1 |    90.00 | Using where     |
|  3 | DEPENDENT UNION    | country    | NULL       | eq_ref | PRIMARY                    | PRIMARY            | 2       | func                |    1 |    90.00 | Using where     |
| NULL | UNION RESULT       | <union2,3> | NULL       | ALL    | NULL                       | NULL               | NULL    | NULL                | NULL |     NULL | Using temporary |
+----+--------------------+------------+------------+--------+----------------------------+--------------------+---------+---------------------+------+----------+-----------------+

The statement above has both a subquery and a union, and the ids in the plan are not all 1: the store subquery has id 4, yet it executes before the country subquery (id 2) and the country union (id 3). Change the order of the where conditions and look at the plan again:

mysql> explain select  a.customer_id, a.address_id, a.first_name, a.last_name
       from customer a inner join address b on a.address_id=b.address_id
       inner join city c on b.city_id=c.city_id
       where a.store_id not in (select store_id from store where manager_staff_id=2)
         and c.country_id in (select country_id from country where country<>"Yemen"
                              union
                              select country_id from country where country<>"Vietnam");
-+---------------------+------+----------+-----------------+
| id | select_type      
| table      | partitions | type   | possible_keys              | key                | key_len | ref                 | rows | filtered | Extra           |
+----+--------------------+------------+------------+--------+----------------------------+--------------------+---------+---------------------+------+----------+-----------------+
|  1 | PRIMARY            | a          | NULL       | ALL    | idx_fk_address_id          | NULL               | NULL    | NULL                |  599 |   100.00 | Using where     |
|  1 | PRIMARY            | b          | NULL       | eq_ref | PRIMARY,idx_fk_city_id     | PRIMARY            | 2       | sakila.a.address_id |    1 |   100.00 | NULL            |
|  1 | PRIMARY            | c          | NULL       | eq_ref | PRIMARY                    | PRIMARY            | 2       | sakila.b.city_id    |    1 |   100.00 | Using where     |
|  3 | DEPENDENT SUBQUERY | country    | NULL       | eq_ref | PRIMARY                    | PRIMARY            | 2       | func                |    1 |    90.00 | Using where     |
|  4 | DEPENDENT UNION    | country    | NULL       | eq_ref | PRIMARY                    | PRIMARY            | 2       | func                |    1 |    90.00 | Using where     |
| NULL | UNION RESULT       | <union3,4> | NULL       | ALL    | NULL                       | NULL               | NULL    | NULL                | NULL |     NULL | Using temporary |
|  2 | SUBQUERY           | store      | NULL       | const  | PRIMARY,idx_unique_manager | idx_unique_manager | 1       | const               |    1 |   100.00 | Using index     |
+----+--------------------+------------+------------+--------+----------------------------+--------------------+---------+---------------------+------+----------+-----------------+
7 rows in set, 1 warning (0.00 sec) 
Now the store subquery's id has become 2, yet its execution moved to the end. From these two outputs, an operation's id appears to be related to where it occurs in the where clause, though this cannot be said with complete certainty. What can be said is that for complex SQL, the order of where conditions may influence plan selection and possibly execution performance, so it is best not to run overly complex SQL statements on MySQL.

3.2 select_type, possible_keys, key, table, ref

table: the table whose rows the operation reads; it can be an index, a table, a union, a derived table, or a (materialized) subquery.
possible_keys: the indexes that could be used.
key: the index actually used.
ref: the columns or constants compared against the index (the key column) to select rows from the table. If this value is func, the value used is the result of some function; which function can be seen by running show warnings after the explain.

select_type is the type of the operation. SIMPLE is a plain select with no subquery or union. PRIMARY is the outermost query. SUBQUERY is the first select in a subquery; DEPENDENT SUBQUERY is the first select in a subquery that depends on the outer query. DERIVED is a derived table, and MATERIALIZED a materialized subquery. UNCACHEABLE SUBQUERY means the subquery result cannot be cached and must be re-evaluated for every row of the outer query. "Dependent" means a correlated subquery was executed.

When writing SQL, avoid DEPENDENT SUBQUERY and UNCACHEABLE SUBQUERY as much as possible: they hurt performance, and the optimizer cannot optimize them away.

Besides select operations, the type can also be insert, delete, or update. Here is an insert example:

mysql> explain insert into city(city, country_id)  values ('beitang', 108), ('johntown', '109');
+----+-------------+-------+------------+------+---------------+------+---------+------+------+----------+-------+
| id | select_type | table | partitions | type | possible_keys | key  | key_len | ref  | rows | filtered | Extra |
+----+-------------+-------+------------+------+---------------+------+---------+------+------+----------+-------+
|  1 | INSERT      | city  | NULL       | ALL  | NULL          | NULL | NULL    | NULL | NULL |     NULL | NULL  |
+----+-------------+-------+------------+------+---------------+------+---------+------+------+----------+-------+
1 row in set (0.00 sec)

The insert plan carries only select_type and table; all other fields are empty. MySQL's insert must check primary-key and unique-key uniqueness and presumably uses the primary key index, but the plan does not show it.

mysql> explain delete from city where city_id=108;
+----+-------------+-------+------------+-------+---------------+---------+---------+-------+------+----------+-------------+
| id | select_type | table | partitions | type  | possible_keys | key     | key_len | ref   | rows | filtered | Extra       |
+----+-------------+-------+------------+-------+---------------+---------+---------+-------+------+----------+-------------+
|  1 | 
DELETE      | city  | NULL       | range | PRIMARY       | PRIMARY | 2       | const |    1 |   100.00 | Using where |
+----+-------------+-------+------------+-------+---------------+---------+---------+-------+------+----------+-------------+
1 row in set (0.00 sec)

The delete plan shows the primary key being used: the available key (possible_keys) is the primary key, the key actually used (key) is also the primary key, and a constant (ref = const) is compared against it. So when writing deletes, consider index use, and for single-row deletes use the primary key or a unique index as the condition whenever possible.

mysql> explain delete from city where country_id=108;
+----+-------------+-------+------------+-------+-------------------+-------------------+---------+-------+------+----------+-------------+
| id | select_type | table | partitions | type  | possible_keys     | key               | key_len | ref   | rows | filtered | Extra       |
+----+-------------+-------+------------+-------+-------------------+-------------------+---------+-------+------+----------+-------------+
|  1 | DELETE      | city  | NULL       | range | idx_fk_country_id | idx_fk_country_id | 2       | const |    2 |   100.00 | Using where |
+----+-------------+-------+------------+-------+-------------------+-------------------+---------+-------+------+----------+-------------+
1 row in set (0.00 sec)

A non-unique index can speed up a delete as well. Now look at a two-table join example:

mysql> explain select a.city, b.country from city a , country b where a.country_id=b.country_id;
+----+-------------+-------+------------+------+-------------------+-------------------+---------+---------------------+------+----------+-------+
| id | select_type | table | partitions | type | possible_keys     | key               | key_len | ref                 | rows | filtered | Extra |
+----+-------------+-------+------------+------+-------------------+-------------------+---------+---------------------+------+----------+-------+
|  1 | SIMPLE      | b     | NULL       | ALL  | PRIMARY           | NULL              | NULL    | NULL                |  109 |   100.00 | NULL  |
|  1 | SIMPLE      | a     | NULL       | ref  
| idx_fk_country_id | idx_fk_country_id | 2       | sakila.b.country_id |    1 |   100.00 | NULL  |
+----+-------------+-------+------------+------+-------------------+-------------------+---------+---------------------+------+----------+-------+
2 rows in set, 1 warning (0.00 sec)

The plan shows that MySQL first scans table b in full (first row of the plan); the primary key is listed as a possible key but, this being a full scan, it is not actually used, and 109 rows come out. The second row shows b's country_id (ref) being matched against a's index idx_fk_country_id, and the joined rows are returned.

Add a restriction on table a to the statement above and look at the plan again:

mysql> explain select a.city, b.country from city a , country b where a.country_id=b.country_id and a.city_id=1;
+----+-------------+-------+------------+-------+---------------------------+---------+---------+-------+------+----------+-------+
| id | select_type | table | partitions | type  | possible_keys             | key     | key_len | ref   | rows | filtered | Extra |
+----+-------------+-------+------------+-------+---------------------------+---------+---------+-------+------+----------+-------+
|  1 | SIMPLE      | a     | NULL       | const | PRIMARY,idx_fk_country_id | PRIMARY | 2       | const |    1 |   100.00 | NULL  |
|  1 | SIMPLE      | b     | NULL       | const | PRIMARY                   | PRIMARY | 2       | const |    1 |   100.00 | NULL  |
+----+-------------+-------+------------+-------+---------------------------+---------+---------+-------+------+----------+-------+
2 rows in set, 1 warning (0.00 sec)

With the extra condition the query asks for the name of the city with city_id 1 and its country. The query on a (city) has two possible indexes; since the condition is on the primary key, the primary key is used, a constant is compared against it, and one row is returned. The second row shows that b is likewise probed with a constant (a's output is a single row) against its primary key.

3.3 type

type is called the join type in the MySQL manual, but it is really what DBAs call the access type. The access types differ greatly in performance; when writing or diagnosing SQL, first check whether the type is appropriate. The following access types appear in execution plans.

NULL: MySQL resolves the query during optimization and obtains the value without accessing any table.

system: the table has exactly one row; system can be seen as a special case of const. An example:

mysql> explain select * from (select now()) a;
+----+-------------+------------+------------+--------+---------------+------+---------+------+------+----------+----------------+
| id | 
select_type | table      | partitions | type   | possible_keys | key  | key_len | ref  | rows | filtered | Extra          |
+----+-------------+------------+------------+--------+---------------+------+---------+------+------+----------+----------------+
|  1 | PRIMARY     | <derived2> | NULL       | system | NULL          | NULL | NULL    | NULL |    1 |   100.00 | NULL           |
|  2 | DERIVED     | NULL       | NULL       | NULL   | NULL          | NULL | NULL    | NULL | NULL |     NULL | No tables used |
+----+-------------+------------+------------+--------+---------------+------+---------+------+------+----------+----------------+
2 rows in set, 1 warning (0.00 sec)

In this plan the second operation, the derived table, has join type NULL, meaning the value of now() was obtained during optimization; its Extra value "No tables used" means no table was accessed at execution time. The first operation has type system: it reads derived table 2 (produced by the second operation), which contains exactly one row.

const: at most one matching row in the table. Because the table is read only once, const is very fast; MySQL uses it when a primary key or unique key is compared with a constant. Fetching one row by primary key uses const, as in this example:

mysql> explain select city from city where city_id=1;
+----+-------------+-------+------------+-------+---------------+---------+---------+-------+------+----------+-------+
| id | select_type | table | partitions | type  | possible_keys | key     | key_len | ref   | rows | filtered | Extra |
+----+-------------+-------+------------+-------+---------------+---------+---------+-------+------+----------+-------+
|  1 | SIMPLE      | city  | NULL       | const | PRIMARY       | PRIMARY | 2       | const |    1 |   100.00 | NULL  |
+----+-------------+-------+------------+-------+---------------+---------+---------+-------+------+----------+-------+
1 row in set, 1 warning (0.00 sec)

eq_ref: for each combination of rows from the preceding tables, one row is read from this table. This is the best join type apart from const and system; the index used is a primary key or a non-null unique key.

mysql> explain select a.city from city a , address b where a.city_id=b.city_id and b.postal_code<23616;
+----+-------------+-------+------------+--------+----------------+---------+---------+------------------+------+----------+-------------+
| id | select_type | table | 
partitions | type   | possible_keys  | key     | key_len | ref              | rows | filtered | Extra       |
+----+-------------+-------+------------+--------+----------------+---------+---------+------------------+------+----------+-------------+
|  1 | SIMPLE      | b     | NULL       | ALL    | idx_fk_city_id | NULL    | NULL    | NULL             |  603 |    33.33 | Using where |
|  1 | SIMPLE      | a     | NULL       | eq_ref | PRIMARY        | PRIMARY | 2       | sakila.b.city_id |    1 |   100.00 | NULL        |
+----+-------------+-------+------------+--------+----------------+---------+---------+------------------+------+----------+-------------+
2 rows in set, 1 warning (0.00 sec)

The first operation returns many rows (603). The second operation uses each city_id from the first operation to probe table city: every city_id matches exactly one row through a's primary key, so the access type is eq_ref. In short, eq_ref can be used when an index column is compared with equality.

ref: for each row of the outer table, several rows may be read from this table. It applies to equality comparisons, or when the index is non-unique, as in this example:

mysql> explain select a.city, b.country from city a , country b where a.country_id=b.country_id;
+----+-------------+-------+------------+------+-------------------+-------------------+---------+---------------------+------+----------+-------+
| id | select_type | table | partitions | type | possible_keys     | key               | key_len | ref                 | rows | filtered | Extra |
+----+-------------+-------+------------+------+-------------------+-------------------+---------+---------------------+------+----------+-------+
|  1 | SIMPLE      | b     | NULL       | ALL  | PRIMARY           | NULL              | NULL    | NULL                |  109 |   100.00 | NULL  |
|  1 | SIMPLE      | a     | NULL       | ref  | idx_fk_country_id | idx_fk_country_id | 2       | sakila.b.country_id |    1 |   100.00 | NULL  |
+----+-------------+-------+------------+------+-------------------+-------------------+---------+---------------------+------+----------+-------+
2 rows in set, 1 warning (0.00 sec)

For each country_id of table b (country), several rows in city correspond, so the join type is ref.
range: retrieve rows in a given range; the key column shows the index used.

mysql> explain select * from city where city_id<3;
+----+-------------+-------+------------+-------+---------------+---------+---------+------+------+----------+-------------+
| id | select_type | table | partitions | type  | possible_keys | key     | key_len | ref  | rows | filtered | Extra       |
+----+-------------+-------+------------+-------+---------------+---------+---------+------+------+----------+-------------+
|  1 | SIMPLE      | city  | NULL       | range | PRIMARY       | PRIMARY | 2       | NULL |    2 |   100.00 | Using where |
+----+-------------+-------+------------+-------+---------------+---------+---------+------+------+----------+-------------+
1 row in set, 1 warning (0.00 sec)

index and ALL: both are full scans; index scans an index while ALL scans the table. Full table scans are common enough, so here is an example of a full index scan:

mysql> explain select country_id from city;
+----+-------------+-------+------------+-------+---------------+-------------------+---------+------+------+----------+-------------+
| id | select_type | table | partitions | type  | possible_keys | key               | key_len | ref  | rows | filtered | Extra       |
+----+-------------+-------+------------+-------+---------------+-------------------+---------+------+------+----------+-------------+
|  1 | SIMPLE      | city  | NULL       | index | NULL          | idx_fk_country_id | 2       | NULL |  600 |   100.00 | Using index |
+----+-------------+-------+------------+-------+---------------+-------------------+---------+------+------+----------+-------------+
1 row in set, 1 warning (0.00 sec)

3.4 Extra

The Extra column sometimes carries very important information. When reading a plan, watch especially for Using filesort and Using temporary, which indicate that a sort or a temporary table was needed during execution.

Using filesort: MySQL must perform an extra sort pass. It rescans the retrieved data, stores the sort keys and row pointers, sorts the keys, and then fetches the rows in order. This is clearly resource-hungry and can have a large impact on statement performance.

mysql> explain select city ,last_update from city order by last_update;
+----+-------------+-------+------------+------+---------------+------+---------+------+------+----------+----------------+ 
| id | select_type | table | partitions | type | possible_keys | key  | key_len | ref  | rows | filtered | Extra          |
+----+-------------+-------+------------+------+---------------+------+---------+------+------+----------+----------------+
|  1 | SIMPLE      | city  | NULL       | ALL  | NULL          | NULL | NULL    | NULL |  600 |   100.00 | Using filesort |
+----+-------------+-------+------------+------+---------------+------+---------+------+------+----------+----------------+
1 row in set, 1 warning (0.01 sec)

The Extra value is Using filesort. MySQL performs the filesort in memory when possible; if there are too many rows to sort, it spills to disk, which degrades performance further. The statement can be optimized by creating an index on the last_update column: since an index is ordered, the extra sort is avoided.

Using temporary: to produce the result, MySQL must create a temporary table to hold intermediate results. This usually happens when order by and group by use different columns, as in the following example:

mysql> explain select a.country, a.country_id, count(b.city_id) from country a, city b where a.country_id=b.country_id group by a.country_id order by a.country;
+----+-------------+-------+------------+-------+-------------------+-------------------+---------+---------------------+------+----------+---------------------------------+
| id | select_type | table | partitions | type  | possible_keys     | key               | key_len | ref                 | rows | filtered | Extra                           |
+----+-------------+-------+------------+-------+-------------------+-------------------+---------+---------------------+------+----------+---------------------------------+
|  1 | SIMPLE      | a     | NULL       | index | PRIMARY           | PRIMARY           | 2       | NULL                |  109 |   100.00 | Using temporary; Using filesort |
|  1 | SIMPLE      | b     | NULL       | ref   | idx_fk_country_id | idx_fk_country_id | 2       | sakila.a.country_id |    5 |   100.00 | Using index                     |
+----+-------------+-------+------------+-------+-------------------+-------------------+---------+---------------------+------+----------+---------------------------------+
2 rows in set, 1 warning (0.00 sec)

Here the group by column is country_id, the sort column is country, and Extra shows Using temporary; Using filesort.

Using join buffer: this appears in a join when MySQL stores part of the outer table's result in the join buffer and matches it against the inner table, which performs much better than matching one row at a time. Some preparation before the example: create two tables without indexes.

mysql> create table country_no_idx as select * from country;
Query OK, 109 rows affected (0.03 sec)
Records: 109  Duplicates: 0  Warnings: 0
mysql> create table city_no_idx as select * from city;
Query OK, 600 rows affected (0.03 sec)
Records: 600  Duplicates: 0  Warnings: 0

The following execution plan uses a join buffer:

mysql> explain select a.country, b.city from country_no_idx a, city_no_idx b where a.country_id=b.country_id;
+----+-------------+-------+------------+------+---------------+------+---------+------+------+----------+----------------------------------------------------+
| id | select_type | table | partitions | type | possible_keys | key  | key_len | ref  | rows | filtered | Extra                                              |
+----+-------------+-------+------------+------+---------------+------+---------+------+------+----------+----------------------------------------------------+
|  1 | SIMPLE      | a     | NULL       | ALL  | NULL          | NULL | NULL    | NULL |  109 |   100.00 | NULL                                               |
|  1 | SIMPLE      | b     | NULL       | ALL  | NULL          | NULL | NULL    | NULL |  600 |    10.00 | Using where; Using join buffer (Block Nested Loop) |
+----+-------------+-------+------------+------+---------------+------+---------+------+------+----------+----------------------------------------------------+
2 rows in set, 1 warning (0.00 sec)

The join buffer is MySQL's optimization of nested-loop joins, and its effect is not always ideal. An index can be used to avoid the join buffer entirely: create one on the inner table (for an inner join, on the larger of the two tables).

4 Optimizing subqueries with materialization 
The MySQL optimizer uses materialization to optimize subqueries. Materialization creates a temporary table, usually in memory, that caches the subquery result. The first time MySQL needs the result, it materializes the subquery, and the optimizer may also build a hash index on the temporary table to speed up lookups. The materialized table is kept in memory where possible and is stored on disk if it grows too large.

Here is an example of subquery materialization:

mysql> explain select * from city where country_id in (select country_id from country_no_idx  where country_id=100);
+----+--------------+----------------+------------+--------+-------------------+-------------------+---------+------------------------+------+----------+-----------------------+
| id | select_type  | table          | partitions | type   | possible_keys     | key               | key_len | ref                    | rows | filtered | Extra                 |
+----+--------------+----------------+------------+--------+-------------------+-------------------+---------+------------------------+------+----------+-----------------------+
|  1 | SIMPLE       | city           | NULL       | ref    | idx_fk_country_id | idx_fk_country_id | 2       | const                  |    6 |   100.00 | Using index condition |
|  1 | SIMPLE       | <subquery2>    | NULL       | eq_ref | <auto_key>        | <auto_key>        | 2       | sakila.city.country_id |    1 |   100.00 | Using where           |
|  2 | MATERIALIZED | country_no_idx | NULL       | ALL    | NULL              | NULL              | NULL    | NULL                   |  109 |    10.00 | Using where           |
+----+--------------+----------------+------------+--------+-------------------+-------------------+---------+------------------------+------+----------+-----------------------+
3 rows in set, 1 warning (0.00 sec)

The operation with id 2 shows that the optimizer materialized the subquery result, and the possible_keys and key values of the <subquery2> row show that a hash index was created on the materialized table.

Materializing a subquery does improve performance somewhat, but creating materialized temporary tables over and over puts pressure on both memory and performance. A further optimization for this kind of subquery is to have the subquery use a unique index or the primary key:

mysql> explain select * from city where country_id in (select country_id from country where country_id=100);
+----+-------------+---------+------------+-------+-------------------+-------------------+---------+-------+------+----------+-------------+
| id | 
select_type | table   | partitions | type  | possible_keys     | key               | key_len | ref   | rows | filtered | Extra       |      +----+-------------+---------+------------+-------+-------------------+-------------------+---------+-------+------+----------+-------------+      |  1 | SIMPLE      | country | NULL       | const | PRIMARY           | PRIMARY           | 2       | const |    1 |   100.00 | Using index |      |  1 | SIMPLE      | city    | NULL       | ref   | idx_fk_country_id | idx_fk_country_id | 2       | const |    6 |   100.00 | NULL        |      +----+-------------+---------+------------+-------+-------------------+-------------------+---------+-------+------+----------+-------------+      2 rows in set, 1 warning (0.00 sec)
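The benefit of materialization described above can be illustrated without a database. The toy model below is my own sketch, not MySQL internals: it contrasts re-scanning the inner result for every outer row (the correlated, DEPENDENT SUBQUERY pattern) with materializing the inner result once into a hash table (the MATERIALIZED / `<auto_key>` pattern), counting how much inner work each strategy does:

```python
# Toy cost model of subquery materialization (an illustration, not MySQL code).
outer_rows = list(range(10_000))      # stand-in for the outer table
inner_rows = list(range(100))         # stand-in for the subquery result

def naive(outer, inner):
    """Correlated-style execution: rescan the inner result for every outer row."""
    scans, matched = 0, []
    for row in outer:
        scans += len(inner)           # one full inner scan per outer row
        if row in inner:              # O(n) list membership test
            matched.append(row)
    return matched, scans

def materialized(outer, inner):
    """Materialize the inner result once, then do O(1) hash lookups."""
    temp = set(inner)                 # the 'materialized temporary table'
    scans = len(inner)                # the inner query runs exactly once
    matched = [row for row in outer if row in temp]
    return matched, scans

m1, cost1 = naive(outer_rows, inner_rows)
m2, cost2 = materialized(outer_rows, inner_rows)
assert m1 == m2                       # same result either way
print(cost1, cost2)                   # inner work: 1000000 vs 100
```

The result is identical, but the inner work drops from rows(outer) × rows(inner) to a single pass, which is exactly why the optimizer prefers MATERIALIZED over DEPENDENT SUBQUERY when it can.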

What to do when MySQL data files are deleted by mistake?

1 How Linux deletes a file

When a file is removed with rm, Linux does not clear the file's inode or data blocks; it only removes the file's name, which points to the inode, from the parent directory's data block. If the file has several hard links, rm removes just one link and the file itself survives. While a file is open, the opening process keeps an entry for it in its file descriptor table; that descriptor points through the system-wide open file table to an entry in the kernel's inode table, which is an in-memory copy of the file's inode. The file can still be read through the descriptor, so when an open file is deleted by mistake, the process's file descriptor can be used to reach it and copy it back to its original location.

2 Handling data files deleted while MySQL is running

2.1 The approach

Once the deletion mechanism is clear, so is the recovery plan. The one thing you must not do is shut down the database: when mysqld exits, its file descriptors disappear, and with them the last references to the file; recovery then requires entirely different methods. The applications connected to the database, however, must be stopped; otherwise the file we copy back may not be current, or may contain inconsistent data. With the applications stopped, the deleted file can be restored through the file descriptor.

2.2 Simulating the failure

To keep the simulation simple and visible, create a table t in its own file-per-table tablespace, so its data file is t.ibd, and delete that file while the database is running:

    rm t.ibd

2.3 Recovery steps

First find the mysqld process id. MySQL is a single-process, multi-threaded server, so the process holding the file open is the mysqld background process:

    ps -ef | grep mysqld

Here the mysqld pid is 563. The fd directory under the process directory lists the files the process has open:

    cd /proc/563/fd
    ls -l
    lr-x------ 1 root root 64 Jul 23 09:33 0 -> /dev/null
    l-wx------ 1 root root 64 Jul 23 09:33 1 -> /usr/local/mysql-5.7.34/data/DESKTOP-FVJ8TG1.err
    lrwx------ 1 root root 64 Jul 23 09:33 10 -> /usr/local/mysql-5.7.34/data/ibtmp1
    lrwx------ 1 root root 64 Jul 23 09:33 11 -> '/tmp/ibdI0AsK (deleted)'
    lrwx------ 1 root root 64 Jul 23 09:33 12 -> /usr/local/mysql-5.7.34/data/mysql/servers.ibd
    lrwx------ 1 root root 64 Jul 23 09:33 27 -> /usr/local/mysql-5.7.34/data/test/t1.ibd
    lrwx------ 1 root root 64 Jul 23 09:33 28 -> '/usr/local/mysql-5.7.34/data/test/t.ibd (deleted)'

The last line is the deleted t.ibd; its file descriptor is 28. Copy the file back to its original directory through the descriptor:

    cp 28 /usr/local/mysql-5.7.34/data/test/t.ibd

Log in to the database and check the data:

    mysql> use test;
    Database changed
    mysql> select count(*) from t;
    +----------+
    | count(*) |
    +----------+
    |     1000 |
    +----------+
    1 row in set (0.00 sec)

2.4 Follow-up

After the database is restarted, the query fails:

    mysql> select count(*) from t;
    ERROR 1812 (HY000): Tablespace is missing for table `test`.`t`.

The tablespace is gone, but the table definition is still in the data dictionary and the data file is back on disk, so the tablespace only needs to be re-imported. Before importing, check the file's owner and permissions: the copy was made as root, so the file belongs to root and must be changed back to mysql:

    chown mysql:mysql t.ibd

Re-import the tablespace:

    alter table t import tablespace;

Check the data:

    mysql> select count(*) from t;
    +----------+
    | count(*) |
    +----------+
    |     1000 |
    +----------+
    1 row in set (0.00 sec)

The data is queryable again.
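The descriptor trick is easy to rehearse without MySQL. The sketch below is plain bash (the file name and contents are invented for the demo): it opens a file, deletes it, then restores it from /proc, a minimal model of the recovery performed above with mysqld's fd 28.

```shell
# Minimal model of recovering a deleted-but-open file via /proc.
echo "hello data" > /tmp/demo_t.ibd
exec 3</tmp/demo_t.ibd             # hold an open descriptor, as mysqld would
rm /tmp/demo_t.ibd                 # "accidental" delete: the name is gone, the inode is not
cp /proc/$$/fd/3 /tmp/demo_t.ibd   # copy the still-open inode back under its old name
exec 3<&-                          # only now is it safe to release the descriptor
cat /tmp/demo_t.ibd
```

The same sequencing matters in the real case: close the descriptor (or stop mysqld) before copying, and the data is unreachable.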

Row locks and deadlocks in Oracle databases

1 What is a row lock, and why does it happen?

When a SQL statement updates or deletes a row, the transaction acquires a lock on just that row: a row lock. The lock is held by a transaction, not by a session. Oracle's rules are: writing a row takes a row lock, reading a row takes no row lock at all; writes block writes, writes do not block reads, and reads block neither reads nor writes. When several transactions update (or delete) the same data at the same time, row lock contention occurs; the corresponding Oracle wait event is enq: TX - row lock contention (enq is short for enqueue, which in Oracle terminology means the same thing as a lock).

The most common scenario is several sessions updating or deleting the same row of the same table at the same time, or one session updating a row that another session is deleting. Inserts can also cause row lock contention when the table has a unique index, because generated key values collide; that is usually the result of a badly chosen unique key or a faulty key-generation method at design time. Another frequent source is bitmap indexes: a single bitmap index entry maps to many table rows, so even updates to different rows can contend on the same index entry. This is exactly why bitmap indexes are forbidden in OLTP workloads.

2 How to diagnose row lock contention

What does row lock contention do to a database? The sessions directly involved are blocked and their applications stop responding; the contention also consumes a lot of resources and drags down the performance of the whole database. Oracle offers several diagnostic tools, such as the AWR and ASH reports, and the dynamic performance views can be queried directly. The experiment below reproduces the simplest possible row lock scenario and walks through its analysis and handling.

2.1 Test data

Create a simple table and insert a few rows; the primary key is not strictly required here.

SQL> create table test(id int primary key, name varchar2(20), salary number);
Table created.
SQL> insert into test values (1, 'zhangsan', 2000);
1 row created.
SQL> insert into test values (2, 'lisi', 2500);
1 row created.
SQL> insert into test values (3, 'wangwu', 2100);
1 row created.

2.2 Reproducing the row lock

In the first session, check that autocommit is off; if it is not, turn it off with set autocommit off before running the update.

SQL> show autocommit
autocommit OFF
SQL> update test set salary=2000 where id=1;
1 row updated.

Open a second session, turn off autocommit, and update the same row:

SQL> show autocommit
autocommit OFF
SQL> update test set salary=2200 where id=1;

This session hangs.

2.3 Analyzing the row lock

While the row lock is held, the locked objects can be seen in v$locked_object:

SQL> select object_id, session_id, locked_mode from v$locked_object;

 OBJECT_ID SESSION_ID LOCKED_MODE
---------- ---------- -----------
     76280        378           3
     76280        471           3

Object 76280 is locked by sessions 378 and 471 at the same time; a lookup in dba_objects shows the locked object is exactly the test table that the update statements modify.

Another view, v$lock, shows the details of every lock in the database. Its TYPE column is the lock type, in Oracle terms a two-letter resource identifier. Three are user types: TM (DML enqueue), TX (transaction enqueue), and UL (user supplied); the rest are system locks.

SQL> select SID, TYPE, ID1, ID2, LMODE, REQUEST, BLOCK from v$lock where lmode in (3,6);

       SID TY        ID1        ID2      LMODE    REQUEST      BLOCK
---------- -- ---------- ---------- ---------- ---------- ----------
       399 RT          1          0          6          0          0
       401 TS      65539          1          3          0          0
       393 KD          0          0          6          0          0
       401 TS     196611          1          3          0          0
       471 TX     655363        994          6          0          1
       378 TM      76280          0          3          0          0
       471 TM      76280          0          3          0          0

7 rows selected.

There is one view every DBA queries in this situation: v$session. It shows each session together with the session blocking it, the wait event, and much more.

SQL> select sid, BLOCKING_SESSION, BLOCKING_SESSION_STATUS, command, event from v$session where BLOCKING_SESSION is not null;

       SID BLOCKING_SESSION BLOCKING_SE    COMMAND EVENT
---------- ---------------- ----------- ---------- ----------------------------------------------------------------
       378              471 VALID                6 enq: TX - row lock contention

Session 378 is blocked by session 471 on the wait event enq: TX - row lock contention. If we want to know which row of which table the contention is on, v$session provides that as well:

SQL> select event, ROW_WAIT_OBJ#, ROW_WAIT_FILE#, ROW_WAIT_BLOCK#, ROW_WAIT_ROW# from v$session where BLOCKING_SESSION is not null;

EVENT                                                            ROW_WAIT_OBJ# ROW_WAIT_FILE# ROW_WAIT_BLOCK# ROW_WAIT_ROW#
---------------------------------------------------------------- ------------- -------------- --------------- -------------
enq: TX - row lock contention                                            76280             12             134             0

The waited-for object id is 76280, the object the update statement modifies; the data file id is 12, the block id is 134, and the row number is 0. Querying dba_objects gives the table name:

SQL> select OWNER, OBJECT_NAME, OBJECT_ID, DATA_OBJECT_ID, OBJECT_TYPE from dba_objects where DATA_OBJECT_ID=76280;

OWNER      OBJECT_NAM  OBJECT_ID DATA_OBJECT_ID OBJECT_TYPE
---------- ---------- ---------- -------------- -----------------------
TEST       TEST            76280          76280 TABLE

To find out which table row the contention is on, use the dbms_rowid package, which extracts the object id, relative file number, block number, and row number from a rowid:

SQL> select id,
            rowid,
            dbms_rowid.rowid_object(rowid) "object",
            dbms_rowid.rowid_relative_fno(rowid) "file",
            dbms_rowid.rowid_block_number(rowid) "block",
            dbms_rowid.rowid_row_number(rowid) "row"
       from test.test where dbms_rowid.rowid_row_number(rowid)=0;

        ID ROWID                  object       file      block        row
---------- ------------------ ---------- ---------- ---------- ----------
         1 AAASn4AAMAAAACGAAA      76280         12        134          0

The contention is on the row of test with id 1, and its location matches the information in v$session exactly. In real diagnoses, knowing exactly where the contention occurs is often very useful: it lets you quickly identify the application responsible, especially when several applications run the same statement, or one application serves several business units.

2.4 Handling the row lock

The usual remedy is to find the blocking session and kill it, either inside the database or at the operating system level.

SQL> select sid, serial# from v$session where sid in (select distinct blocking_session from v$session);

       SID    SERIAL#
---------- ----------
         1      36638

Kill the session inside the database:

SQL> alter system kill session '1,36638';
System altered.

The database alert log records:

2022-08-05T09:25:36.879896+08:00
ORCLPDB1(3):KILL SESSION for sid=(1, 36638):
ORCLPDB1(3):  Reason = alter system kill session
ORCLPDB1(3):  Mode = KILL SOFT -/-/-
ORCLPDB1(3):  Requestor = USER (orapid = 70, ospid = 25353, inst = 1)
ORCLPDB1(3):  Owner = Process: USER (orapid = 71, ospid = 25414)
ORCLPDB1(3):  User = oracle
ORCLPDB1(3):  Program = sqlplus@iZ2ze0t8khaprrpfvmevjiZ (TNS V1-V3)
ORCLPDB1(3):  Result = ORA-0

The killed session has been logged out of the database:

SQL> select * from dual;
select * from dual
*
ERROR at line 1:
ORA-01012: not logged on
Process ID: 25414
Session ID: 1 Serial number: 36638

If the blocking session cannot be killed inside the database, look up its operating system pid and kill the OS process with kill or kill -9:

SQL> select a.sid, b.spid from v$session a, v$process b
     where a.paddr=b.addr and
     a.sid in (select distinct blocking_session from v$session);

       SID SPID
---------- ------------------------
         1 25534

The following statement can even generate the OS kill command for you:

SQL> select 'kill -9 ' || b.spid || ';' kill_session from
       v$session a, v$process b
      where a.paddr=b.addr
        and a.sid in
            (select distinct blocking_session from v$session);

KILL_SESSION
---------------------------------
kill -9 25534;

Run the generated command at the OS prompt:

[oracle@iZ2ze0t8khaprrpfvmevjiZ ~]$ kill -9 25534;

The killed session then looks like this:

SQL> select * from dual;
select * from dual
       *
ERROR at line 1:
ORA-03113: end-of-file on communication channel
Process ID: 25534
Session ID: 1 Serial number: 57400

In this case the alert log contains no record of the kill performed at the OS level.

3 Deadlocks: diagnosis and handling

3.1 Oracle's definition and handling of deadlocks

Oracle's official definition reads:

A deadlock is a situation in which two or more users are waiting for data locked by each other. Deadlocks prevent some transactions from continuing to work. Oracle Database automatically detects deadlocks and resolves them by rolling back one statement involved in the deadlock, releasing one set of the conflicting row locks. The database returns a corresponding message to the transaction that undergoes statement-level rollback. The statement rolled back belongs to the transaction that detects the deadlock. Usually, the signalled transaction should be rolled back explicitly, but it can retry the rolled-back statement after waiting.

In short: Oracle detects deadlocks automatically and rolls back one statement involved in the deadlock; the database returns a message to the transaction that undergoes the statement-level rollback, and the rolled-back statement belongs to the transaction that detected the deadlock.

3.2 Simulating a deadlock

-- session 1: check the session's sid

SQL> select userenv('sid') from dual;

USERENV('SID')
--------------
           425

Turn off autocommit:

SQL> set autocommit off;
SQL> show autocommit
autocommit OFF

In the main (monitoring) session:

SQL> select a.sid, b.pid from v$session a, v$process b where a.paddr=b.addr and a.sid=425;

       SID        PID
---------- ----------
       425         64

Open another session as session 2:

-- session 2
SQL> set autocommit off;
SQL> show autocommit
autocommit OFF

Then run the following statements in sessions 1 and 2, in this order:

-- session 1
SQL> update test set salary=465;
-- session 2
SQL> update test2 set salary=473;
-- session 1
SQL> update test2 set salary=466;
-- session 2
SQL> update test set salary=474;

About ten seconds later, session 1 reports:

update test2 set salary=466
                 *
ERROR at line 1:
ORA-00060: deadlock detected while waiting for resource

3.3 Diagnosing the deadlock

The database alert log contains:

2022-08-01T14:48:39.788730+08:00
ORCLPDB1(3):ORA-00060: Deadlock detected. See Note 60.1 at My Oracle Support for Troubleshooting ORA-60 Errors. More info in file /opt/oracle/diag/rdbms/orclcdb/ORCLCDB/trace/ORCLCDB_ora_13275.trc.

As the alert log says, the details are in the trace file of the session's server process:

[oracle@iZ2ze0t8khaprrpfvmevjiZ trace]$ more /opt/oracle/diag/rdbms/orclcdb/ORCLCDB/trace/ORCLCDB_ora_13275.trc
[Transaction Deadlock]
The following deadlock is not an ORACLE error. It is a
deadlock due to user error in the design of an application
or from issuing incorrect ad-hoc SQL. The following
information may aid in determining the deadlock:

Deadlock graph:
                                          ------------Blocker(s)-----------  ------------Waiter(s)------------
Resource Name                             process session holds waits serial  process session holds waits serial
TX-000A0003-000004B0-79F3F24D-00000000         64     425     X        42850      65     456           X  46887
TX-00070006-0000049A-79F3F24D-00000000         65     456     X        46887      64     425           X  42850

----- Information for waiting sessions -----
Session 425:
  sid: 425 ser: 42850 audsid: 9310003 user: 109/TEST
  pdb: 3/ORCLPDB1
    flags: (0x41) USR/- flags2: (0x40009) -/-/INC
    flags_idl: (0x1) status: BSY/-/-/- kill: -/-/-/-/-
  pid: 64 O/S info: user: oracle, term: UNKNOWN, ospid: 13275
    image: oracle@iZ2ze0t8khaprrpfvmevjiZ
  client details:
    O/S info: user: oracle, term: pts/1, ospid: 13273
    machine: iZ2ze0t8khaprrpfvmevjiZ program: sqlplus@iZ2ze0t8khaprrpfvmevjiZ (TNS V1-V3)
    application name: SQL*Plus, hash value=3669949024
  current SQL:
  update test2 set salary=463

Session 456:
  sid: 456 ser: 46887 audsid: 9340002 user: 109/TEST
  pdb: 3/ORCLPDB1
    flags: (0x41) USR/- flags2: (0x40009) -/-/INC
    flags_idl: (0x1) status: BSY/-/-/- kill: -/-/-/-/-
  pid: 65 O/S info: user: oracle, term: UNKNOWN, ospid: 13432
    image: oracle@iZ2ze0t8khaprrpfvmevjiZ
  client details:
    O/S info: user: oracle, term: pts/2, ospid: 13430
    machine: iZ2ze0t8khaprrpfvmevjiZ program: sqlplus@iZ2ze0t8khaprrpfvmevjiZ (TNS V1-V3)
    application name: SQL*Plus, hash value=3669949024
  current SQL:
  update test set salary=471

The trace file shows the full detail of the deadlock: the resources involved (here, single table rows), the holder and waiter of each resource, and the statements being executed when the deadlock occurred. Note that Oracle only performs a statement-level rollback when it resolves a deadlock; it does not end the transaction or the session, and the transaction still has to be committed or rolled back.

Oracle database: how to capture the bind variable values used when a SQL statement executes

1 When do you need the bind values?

Bind variables are a standard Oracle tuning technique: used properly, they sharply reduce hard parsing and make SQL execution more efficient. The flip side is that bind variables make troubleshooting harder. In most diagnostic tools, a statement that uses binds shows the placeholders rather than the values used at execution time, so it is difficult to determine exactly which execution went wrong.

Row locks and buffer busy waits are events every DBA runs into. One common scenario: a table is accessed by several applications, and access to that table happens to cause row lock or buffer busy contention that blocks a large number of sessions. You then need to determine which application's anomaly caused the blocking, but because the statements use bind variables, the AWR and ASH reports do not show the runtime values. If you can see the actual values, you can usually tell from them which application (or applications) misbehaved and narrow down the fault.

Bind values can be captured with Oracle event 10046. This is an internal Oracle event; setting it produces detailed trace output about parses, calls, waits, bind variables, and so on, which makes it very valuable for performance analysis. The commonly used levels are 1, 4, 8, and 12: level 1 is equivalent to sql_trace; level 4 adds bind variables; level 8 adds wait events; level 12 adds both binds and waits to level 1. For bind values, level 4 is enough.

For the current session you can set event 10046 directly; to capture another session's bind values, use oradebug or the dbms_monitor / dbms_system packages, as described below.

2 Capturing bind values with dbms_monitor

To trace another session with dbms_monitor, first obtain the target session's sid and serial#. When the database has many blocked sessions, the blocking session's sid can be found in v$session. In this experiment, run the following in the session to be traced to get its sid:

SQL> select userenv('sid') from dual;

USERENV('SID')
--------------
           440

Next get that session's serial# and trace file name. Although we enable tracing from another session with dbms_monitor, the trace is written to the traced session's trace file, not to the tracing session's; get this wrong and you will find nothing. The serial# comes from v$session and the trace file from v$process; joining the two views gives both:

SQL> select a.sid, a.serial#, b.spid, b.TRACEFILE from v$session a, v$process b where a.paddr=b.ADDR and a.sid=440;

       SID    SERIAL# SPID
---------- ---------- ------------------------
TRACEFILE
--------------------------------------------------------------------------------
       440      56821 4093
/opt/oracle/diag/rdbms/orclcdb/ORCLCDB/trace/ORCLCDB_ora_4093.trc

With that information, activate event 10046 for the session via dbms_monitor:

exec dbms_monitor.session_trace_enable(session_id=>440, serial_num=>56821, waits=>true, binds=>true);

The first parameter is the sid and the second the serial#; their values may also be wrapped in single quotes, but double quotes raise an error. With waits and binds both true, the trace includes wait events and bind values.

Execute a statement in the traced session:

SQL> update test set salary=2000 where id=2;
1 row updated.

In the tracing session, turn tracing off:

exec dbms_monitor.session_trace_disable(session_id=>440, serial_num=>56821);

Look at the trace file:

SQL> !more /opt/oracle/diag/rdbms/orclcdb/ORCLCDB/trace/ORCLCDB_ora_4093.trc
PARSING IN CURSOR #140411130438576 len=332 dep=1 uid=0 oct=3 lid=0 tim=64183077659 hv=2698389488 ad='68d79de8' sqlid='acmvv4fhdc9zh'
select obj#,type#,ctime,mtime,stime, status, dataobj#, flags, oid$, spare1, spare2, spare3, signature, spare7, spare8, spare9, nvl(dflcollid, 16382), creappid, creverid, modappid, modverid, crepatchid, modpatchid from obj$ where owner#=:1 and name=:2 and namespace=:3 and remoteowner is null and linkname is null and subname is null
END OF STMT
PARSE #140411130438576:c=173,e=173,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,plh=813480514,tim=64183077658
BINDS #140411130438576:
 Bind#0
  oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
  oacflg=00 fl2=1000001 frm=00 csi=00 siz=176 off=0
  kxsbbbfp=7fb4038d01e0  bln=22  avl=03  flg=05
  value=109
 Bind#1
  oacdty=01 mxl=128(04) mxlc=00 mal=00 scl=00 pre=00
  oacflg=10 fl2=0001 frm=01 csi=873 siz=0 off=24
  kxsbbbfp=7fb4038d01f8  bln=128  avl=04  flg=01
  value="TEST"
 Bind#2
  oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
  oacflg=00 fl2=1000001 frm=00 csi=00 siz=0 off=152
  kxsbbbfp=7fb4038d0278  bln=22  avl=02  flg=01
  value=1

The excerpt above is part of the trace file. Because this was the first execution of the statement, it had to be hard parsed; the statement shown is one of the recursive queries run during that hard parse, looking up object information in obj$. It has three bind variables, written :1, :2, :3 in the text, with their actual values under Bind#0, Bind#1, and Bind#2. Bind variable 1 is owner#, a numeric value of 109. Check the objects under that owner:

SQL> select obj#, owner#, name from obj$ where owner#=109;

      OBJ#     OWNER# NAME
---------- ---------- ----------
     76281        109 SYS_C008220
     76280        109 TEST
     76290        109 TEST2

Then look up that owner's name:

SQL> select username from dba_users where user_id=109;

USERNAME
--------------------------------------------------------------------------------
TEST

That is exactly the user the traced session logged in as. The second bind variable is a name: from the definition of obj$ we know it is the object name, and its value, TEST, is exactly the table name in the update statement.

3 Tracing another session with oradebug

Tracing with oradebug requires the target session's Oracle pid or OS pid; here we use the Oracle pid. As before, first get the target session's sid by running this in that session:

SQL> select userenv('sid') from dual;

Then query the process's Oracle pid:

SQL> select a.sid, b.pid from v$session a, v$process b where a.paddr=b.addr and a.sid=30;

       SID        PID
---------- ----------
        30         66

Point oradebug at the session:

SQL> oradebug setorapid 66
Oracle pid: 66, Unix process pid: 4134, NID: 4026531836, image: oracle@iZ2ze0t8khaprrpfvmevjiZ

The output shows the Oracle pid is 66 and the OS pid is 4134. Check the current trace file name:

SQL> oradebug tracefile_name
/opt/oracle/diag/rdbms/orclcdb/ORCLCDB/trace/ORCLCDB_ora_4134.trc

The trace file name contains the traced session's OS pid. If you want to change the name to make the file easier to find, set a tracefile identifier in the traced session:

SQL> alter session set tracefile_identifier=test;

Back in the monitoring session, the trace file name has changed:

SQL> oradebug tracefile_name
/opt/oracle/diag/rdbms/orclcdb/ORCLCDB/trace/ORCLCDB_ora_4134_TEST.trc

With the oradebug target set, open event 10046 at level 4 with the following command:

SQL> oradebug event 10046 trace name context forever,level 4;
Statement processed.

After a while, close the event:

SQL> oradebug event 10046 trace name context off;
Statement processed.

Look at the trace file:

SQL> !more /opt/oracle/diag/rdbms/orclcdb/ORCLCDB/trace/ORCLCDB_ora_4134_TEST.trc
PARSING IN CURSOR #139953117550368 len=39 dep=0 uid=109 oct=6 lid=109 tim=66133499254 hv=3517068083 ad='664ffb90' sqlid='54b1ft78u4ctm'
update test2 set salary=1500 where id=3
END OF STMT
PARSE #139953117550368:c=61181,e=63361,p=0,cr=599,cu=0,mis=1,r=0,dep=0,og=1,plh=1989979325,tim=66133499254
EXEC #139953117550368:c=243,e=243,p=0,cr=2,cu=3,mis=0,r=1,dep=0,og=1,plh=1989979325,tim=66133499544
STAT #139953117550368 id=1 cnt=0 pid=0 pos=1 obj=76290 op='UPDATE  TEST2 (cr=2 pr=0 pw=0 str=1 time=178 us)'
STAT #139953117550368 id=2 cnt=1 pid=1 pos=1 obj=76290 op='TABLE ACCESS FULL TEST2 (cr=2 pr=0 pw=0 str=1 time=47 us cost=3 size=7 card=1)'

The trace output can be read the same way; the bind-variable format was covered in the previous section and is not repeated here.

4 The DBMS_SYSTEM package

DBMS_SYSTEM works much like dbms_monitor, with slightly different syntax; it is not covered here.
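When a 10046 trace runs to tens of thousands of lines, picking the bind values out by hand is slow; grep does it in one pass. The snippet below writes a tiny fake trace fragment (the file name and contents are invented, modeled on the format shown above) and extracts only the BINDS headers, bind numbers, and values.

```shell
# Pull bind sections and their values out of a (simulated) 10046 trace with grep.
cat > /tmp/demo_10046.trc <<'EOF'
BINDS #140411130438576:
 Bind#0
  oacdty=02 mxl=22(22)
  value=109
 Bind#1
  oacdty=01 mxl=128(04)
  value="TEST"
EOF
grep -E '^(BINDS | Bind#|  value=)' /tmp/demo_10046.trc
```

On a real trace, point the same grep at the file reported by oradebug tracefile_name.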

Linux tips: processing files with scripts

grep, sed, and AWK, the "three musketeers" of Linux text processing, are extremely powerful; used well, they noticeably improve day-to-day operations work. This article records how I use them in practice, mainly as a reference, and fellow operators may find it useful too.

1 Creating and displaying sample files with scripts

[root@iZuf6b1znamggglbunalhzZ ~]# for i in $(seq 1 10);do echo $i >>test.txt;done;
[root@iZuf6b1znamggglbunalhzZ ~]# cat test.txt
1
2
3
4
5
6
7
8
9
10

The script above uses a for loop; the seq command prints a sequence of numbers. Can it be written more simply? Of course: drop the loop entirely and write seq 1 10 > test.txt, which produces the same result. Shell syntax is very flexible; the loop can also be written as

for i in `seq 1 10`;do echo $i >>test.txt; done;

The backticks can wrap any shell command, for example ls -l:

[root@iZ2ze0t8khaprrpfvmevjiZ ~]# for i in `ls `; do echo $i ;done;
      my_oss
      mysql-5.7.34-linux-glibc2.12-x86_64.tar.gz
      oracle-database-ee-21c-1.0-1.ol8.x86_64.rpm
      oracle-database-preinstall-21c-1.0-1.el8.x86_64.rpm
      test.txt

This form of the for loop iterates over a list. The loop body can do more work, such as adding strings before and after the number:

[root@iZ2ze0t8khaprrpfvmevjiZ ~]# for i in $(seq 1 10);do echo test $i":this is test1" >>test.txt;done;
[root@iZ2ze0t8khaprrpfvmevjiZ ~]# cat test.txt
      test 1:this is test1
      test 2:this is test1
      test 3:this is test1
      test 4:this is test1
      test 5:this is test1
      test 6:this is test1
      test 7:this is test1
      test 8:this is test1
      test 9:this is test1
      test 10:this is test1

To empty a text file you can simply delete it; if you want to keep the file and only clear its contents, copy or move an empty file over it, for example with cp:

[root@iZ2ze0t8khaprrpfvmevjiZ ~]# cp /dev/null test.txt
      cp: overwrite 'test.txt'? y
[root@iZ2ze0t8khaprrpfvmevjiZ ~]# cat test.txt

cat has some useful options too; -n prints a line number at the left of each line:

[root@iZuf6b1znamggglbunalhzZ ~]# cat -n /var/log/messages|head -5
     1  Nov 30 15:31:43 iZuf6b1znamggglbunalhzZ systemd: Unit rsyslog.service entered failed state.
     2  Nov 30 15:31:43 iZuf6b1znamggglbunalhzZ systemd: rsyslog.service failed.
     3  Nov 30 15:31:43 iZuf6b1znamggglbunalhzZ systemd: Stopped Dump dmesg to /var/log/dmesg.
     4  Nov 30 15:31:43 iZuf6b1znamggglbunalhzZ systemd: Stopping Session 2 of user root.
     5  Nov 30 15:31:43 iZuf6b1znamggglbunalhzZ systemd: Stopped target Timers.

If the file is too large for one screen, use more, which shows one page at a time; press the space bar for the next page. It has some handy tricks of its own, for instance starting the display at line 20:

[root@iZuf6b1znamggglbunalhzZ ~]# more +20 /var/log/messages
    Nov 30 15:31:43 iZuf6b1znamggglbunalhzZ systemd: Stopping Command Scheduler...
    Nov 30 15:31:43 iZuf6b1znamggglbunalhzZ systemd: Stopping NTP client/server...
    Nov 30 15:31:43 iZuf6b1znamggglbunalhzZ systemd: Stopping OpenSSH server daemon...
    Nov 30 15:31:43 iZuf6b1znamggglbunalhzZ systemd: Stopping Job spooling tools...

head shows the first lines of a file; the following shows the first 5:

[root@iZuf6b1znamggglbunalhzZ ~]# head -5 /etc/passwd
    root:x:0:0:root:/root:/bin/bash
    bin:x:1:1:bin:/bin:/sbin/nologin
    daemon:x:2:2:daemon:/sbin:/sbin/nologin
    adm:x:3:4:adm:/var/adm:/sbin/nologin
    lp:x:4:7:lp:/var/spool/lpd:/sbin/nologin

tail shows the last lines; in addition, with -f it follows the file and keeps printing new lines as they are appended. The following shows the file's last 10 lines and then each new line as it arrives:

[root@iZuf6b1znamggglbunalhzZ ~]# tail -f -n 10 /var/log/messages
    Jul 15 14:53:33 iZuf6b1znamggglbunalhzZ systemd-logind: Removed session 44.
    Jul 15 14:53:54 iZuf6b1znamggglbunalhzZ systemd: Started Session 45 of user root.
    Jul 15 14:53:54 iZuf6b1znamggglbunalhzZ systemd-logind: New session 45 of user root.
    Jul 15 14:53:54 iZuf6b1znamggglbunalhzZ systemd-logind: Removed session 45.
    Jul 15 14:53:55 iZuf6b1znamggglbunalhzZ systemd: Started Session 46 of user root.
    Jul 15 14:53:55 iZuf6b1znamggglbunalhzZ systemd-logind: New session 46 of user root.
    Jul 15 14:53:55 iZuf6b1znamggglbunalhzZ systemd-logind: Removed session 46.
    Jul 15 14:53:56 iZuf6b1znamggglbunalhzZ systemd: Started Session 47 of user root.
    Jul 15 14:53:56 iZuf6b1znamggglbunalhzZ systemd-logind: New session 47 of user root.
    Jul 15 14:53:56 iZuf6b1znamggglbunalhzZ systemd-logind: Removed session 47.
    Jul 15 14:54:40 iZuf6b1znamggglbunalhzZ systemd: Started Session 48 of user root.
    Jul 15 14:54:40 iZuf6b1znamggglbunalhzZ systemd-logind: New session 48 of user root.
    Jul 15 14:54:40 iZuf6b1znamggglbunalhzZ systemd-logind: Removed session 48.
    Jul 15 14:54:41 iZuf6b1znamggglbunalhzZ systemd: Started Session 49 of user root.
    Jul 15 14:54:41 iZuf6b1znamggglbunalhzZ systemd-logind: New session 49 of user root.
    Jul 15 14:54:41 iZuf6b1znamggglbunalhzZ systemd-logind: Removed session 49.
    Jul 15 14:54:42 iZuf6b1znamggglbunalhzZ systemd: Started Session 50 of user root.
    Jul 15 14:54:42 iZuf6b1znamggglbunalhzZ systemd-logind: New session 50 of user root.
    Jul 15 14:54:42 iZuf6b1znamggglbunalhzZ systemd-logind: Removed session 50.

To count a text file's lines, words, and characters or bytes, use wc:

[root@iZ2ze0t8khaprrpfvmevjiZ ~]# wc /etc/passwd
  33   70 1733 /etc/passwd

The first column, 33, is the line count; the second, 70, is the word count (words are separated by whitespace); the third, 1733, is the character or byte count. To get only the line count use -l, words -w, characters -m, bytes -c. For example, the line count alone:

[root@iZuf6b1znamggglbunalhzZ ~]# wc -l /etc/passwd
     21 /etc/passwd

The file command shows a file's encoding and type:

[root@iZuf6b1znamggglbunalhzZ ~]# file /var/log/messages
     /var/log/messages: UTF-8 Unicode text, with very long lines

2 grep

grep finds and prints the lines containing a string. For example, to see which port the ssh service listens on, search the ssh configuration file for lines containing Port (capital P):

[root@iZuf6b1znamggglbunalhzZ ~]# grep Port /etc/ssh/ssh_config
    #   Port 22

The Port line is commented out by default, meaning ssh uses the default port 22. To find out which line of the configuration file Port is on, use -n:

[root@iZuf6b1znamggglbunalhzZ ~]# grep -n Port /etc/ssh/ssh_config
     41:#   Port 22

The -c option counts the lines on which the string appears:

[root@iZuf6b1znamggglbunalhzZ ~]# grep -c localhost /etc/hosts
     2

The -v option excludes the lines containing a string. It is commonly used to filter out grep itself when listing processes, for example when checking the sshd processes:

[root@iZuf6b1znamggglbunalhzZ ~]# ps -ef | grep sshd
     root      1099     1  0 14:41 ?        00:00:00 /usr/sbin/sshd -D
     root      1389  1099  0 14:42 ?        00:00:00 sshd: root@pts/0
     root      2202  1408  0 15:00 pts/0    00:00:00 grep --color=auto sshd
[root@iZuf6b1znamggglbunalhzZ ~]# ps -ef | grep -v grep | grep sshd
     root      1099     1  0 14:41 ?        00:00:00 /usr/sbin/sshd -D
     root      1389  1099  0 14:42 ?        00:00:00 sshd: root@pts/0

With -v, the output no longer contains the grep process itself; the result is free of that noise, easier to read, and more consistent, which is why the idiom appears so often in scripts.

grep also provides -r, which searches for a string through a directory and all its subdirectories:

[root@iZuf6b1znamggglbunalhzZ ~]# grep -r *.sh /etc
     Binary file /etc/udev/hwdb.bin matches
     /etc/NetworkManager/dispatcher.d/11-dhclient:    for f in $ETCDIR/dhclient.d/*.sh; do
     /etc/bashrc:    for i in /etc/profile.d/*.sh; do
     /etc/profile:for i in /etc/profile.d/*.sh /etc/profile.d/sh.local ; do

If the search string is more complex, containing spaces or special characters, put it in single quotes; a regular expression can also go inside. The following matches ntp1…, ntp2…, and so on; double quotes would behave the same here:

[root@iZuf6b1znamggglbunalhzZ ~]# grep 'ntp[0-9].aliyun.com' /etc/ntp.conf
       restrict ntp1.aliyun.com nomodify notrap nopeer noquery
       restrict ntp2.aliyun.com nomodify notrap nopeer noquery
       restrict ntp3.aliyun.com nomodify notrap nopeer noquery
       restrict ntp4.aliyun.com nomodify notrap nopeer noquery
       restrict ntp5.aliyun.com nomodify notrap nopeer noquery
       restrict ntp6.aliyun.com nomodify notrap nopeer noquery
       server ntp1.aliyun.com iburst minpoll 4 maxpoll 10
       server ntp2.aliyun.com iburst minpoll 4 maxpoll 10
       server ntp3.aliyun.com iburst minpoll 4 maxpoll 10
       server ntp4.aliyun.com iburst minpoll 4 maxpoll 10
       server ntp5.aliyun.com iburst minpoll 4 maxpoll 10
       server ntp6.aliyun.com iburst minpoll 4 maxpoll 10

3 sed

sed is the standard Linux stream editor; the name is short for "stream editor". It is a line-oriented tool: it processes the input line by line and writes the result to standard output. We use the test.txt file created earlier to demonstrate a few of its uses. A sed command has the form [address]action/argument/flags: the address selects the lines to operate on; the action is the operation, such as append or substitute; the argument part differs per action; and the flags modify the scope, such as the frequently seen g flag, which means "global" and applies the operation to every match in the line.

[root@iZ2ze0t8khaprrpfvmevjiZ ~]# sed '3,5d' test.txt
      test 1:this is test1
      test 2:this is test1
      test 6:this is test1
      test 7:this is test1
      test 8:this is test1
      test 9:this is test1
      test 10:this is test1

The d action deletes: sed removed lines 3 through 5 and wrote the rest to standard output. The $ wildcard matches through to the end of the file:

[root@iZ2ze0t8khaprrpfvmevjiZ ~]# sed '3,$d' test.txt
      test 1:this is test1
      test 2:this is test1

sed deleted from line 3 to the end of the file, keeping only the first two lines.

The a action appends a new line; the following appends one line at the end of the file, with the content given after a:

[root@iZuf6b1znamggglbunalhzZ ~]# sed '$a admin:x:1000:1000:admin:/home/admin:/bin/bash' /etc/passwd
       root:x:0:0:root:/root:/bin/bash
       bin:x:1:1:bin:/bin:/sbin/nologin
       daemon:x:2:2:daemon:/sbin:/sbin/nologin
       adm:x:3:4:adm:/var/adm:/sbin/nologin
       polkitd:x:999:998:User for polkitd:/:/sbin/nologin
       sshd:x:74:74:Privilege-separated SSH:/var/empty/sshd:/sbin/nologin
       postfix:x:89:89::/var/spool/postfix:/sbin/nologin
       chrony:x:998:996::/var/lib/chrony:/sbin/nologin
       nscd:x:28:28:NSCD Daemon:/:/sbin/nologin
       tcpdump:x:72:72::/:/sbin/nologin
       admin:x:1000:1000:admin:/home/admin:/bin/bash

The s action substitutes a string within a line. For example, after installing Linux we often need to enable or disable SELinux for security reasons; for the change to survive a reboot, the /etc/selinux/config file must be edited. The simplest way is to open the file in vi, edit it, and save; the more efficient way is sed's s command: find the SELINUX line and replace disabled with enforcing. With the -i option, sed edits the file in place.

[root@iZuf6b1znamggglbunalhzZ ~]# sed 's/SELINUX=disabled/SELINUX=enforcing/' /etc/selinux/config
       # This file controls the state of SELinux on the system.
       # SELINUX= can take one of these three values:
       #     enforcing - SELinux security policy is enforced.
       #     permissive - SELinux prints warnings instead of enforcing.
       #     disabled - No SELinux policy is loaded.
       SELINUX=enforcing
       # SELINUXTYPE= can take one of three values:
       #     targeted - Targeted processes are protected,
       #     minimum - Modification of targeted policy. Only selected processes are protected.
       #     mls - Multi Level Security protection.
       SELINUXTYPE=targeted

The c action changes the whole content of a line. The following changes line 1; the string after c is the new content:

[root@iZuf6b1znamggglbunalhzZ ~]# sed '1c abcdefg' /etc/passwd
       abcdefg
       bin:x:1:1:bin:/bin:/sbin/nologin
       daemon:x:2:2:daemon:/sbin:/sbin/nologin
       adm:x:3:4:adm:/var/adm:/sbin/nologin
       lp:x:4:7:lp:/var/spool/lpd:/sbin/nologin
       sync:x:5:0:sync:/sbin:/bin/sync
       shutdown:x:6:0:shutdown:/sbin:/sbin/shutdown
       halt:x:7:0:halt:/sbin:/sbin/halt
       mail:x:8:12:mail:/var/spool/mail:/sbin/nologin
       operator:x:11:0:operator:/root:/sbin/nologin
       games:x:12:100:games:/usr/games:/sbin/nologin
       ftp:x:14:50:FTP User:/var/ftp:/sbin/nologin
       nobody:x:99:99:Nobody:/:/sbin/nologin
       systemd-network:x:192:192:systemd Network Management:/:/sbin/nologin
       dbus:x:81:81:System message bus:/:/sbin/nologin
       polkitd:x:999:998:User for polkitd:/:/sbin/nologin
       sshd:x:74:74:Privilege-separated SSH:/var/empty/sshd:/sbin/nologin
       postfix:x:89:89::/var/spool/postfix:/sbin/nologin
       chrony:x:998:996::/var/lib/chrony:/sbin/nologin
       nscd:x:28:28:NSCD Daemon:/:/sbin/nologin
       tcpdump:x:72:72::/:/sbin/nologin

4 AWK

AWK is a language for processing text files on Linux, with powerful text analysis capabilities. Its name comes from its three creators, Alfred Aho, Peter Weinberger, and Brian Kernighan: the initials of their surnames. Together with grep and sed it makes up the Linux "three musketeers", and AWK is the most capable of the three.

awk is often used to print particular columns of text. For instance, shell scripts frequently need the IP address of a network interface. First look at the interface with ifconfig:

[root@iZ2ze0t8khaprrpfvmevjiZ ~]# ifconfig lo
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 103285  bytes 6961676 (6.6 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 103285  bytes 6961676 (6.6 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Then extract the IP address with awk. The awk program usually goes inside single quotes. In its simplest form, the first part is a pattern that selects the lines to operate on, and the second part, inside {}, is the action, here printing the second column. $n is an awk built-in variable: $0 is the whole line and $n is the n-th column.

[root@iZ2ze0t8khaprrpfvmevjiZ ~]# ifconfig lo|awk '/inet /{print $2}'
        127.0.0.1

On a Linux system the root (/) filesystem is critical: when it fills up, the operating system and its software start failing. The following prints the space left on it:

[root@iZ2ze0t8khaprrpfvmevjiZ ~]#  df -h |awk '/\/$/{print $4}'
22G

In this awk command the pattern is /\/$/: $ matches the end of the line, and the backslash escapes the following /, cancelling its special meaning, so the pattern matches lines ending with "/". Looking at the raw df -h output helps make sense of it:

[root@iZ2ze0t8khaprrpfvmevjiZ ~]# df -h
        Filesystem      Size  Used Avail Use% Mounted on
        devtmpfs        891M     0  891M   0% /dev
        tmpfs           909M  476M  433M  53% /dev/shm
        tmpfs           909M  484K  908M   1% /run
        tmpfs           909M     0  909M   0% /sys/fs/cgroup
        /dev/vda3        40G   19G   22G  47% /
        /dev/vda2       100M  7.3M   93M   8% /boot/efi
        tmpfs           182M     0  182M   0% /run/user/0

In the df -h output, only the root filesystem's line ends with "/"; once that line is matched, the fourth column, the remaining space, is printed. To make the output friendlier, concatenate a string:

[root@iZ2ze0t8khaprrpfvmevjiZ ~]#  df -h |awk '/\/$/{print "root filesystem left space is "$4}'
    root filesystem left space is 22G

Many companies require custom user ids on Linux to be greater than 1000, in which case users with id below 1000 are almost all built-in system users. To count the built-in users:

[root@iZuf6b1znamggglbunalhzZ ~]# awk -F: '$3<1000{x++} END{print x}' /etc/passwd
        21

Here -F sets the field separator: instead of the default whitespace we split on ":". The third field is the user id; whenever it is below 1000, x (whose initial value is 0) is incremented, and when the scan of the file finishes, x, the number of users with id below 1000, is printed.

When creating Linux users, we often do not want the user to be able to run shell commands and set the shell to nologin, as for the mysql user, which improves security. To extract the users that do have command execution rights:

[root@iZuf6b1znamggglbunalhzZ ~]# awk -F: '$7!~/nologin$/{print $1,$7}' /etc/passwd
        root /bin/bash
        sync /bin/sync
        shutdown /sbin/shutdown
        halt /sbin/halt

Again the separator is ":"; whenever the seventh field does not end in "nologin", fields 1 and 7 are printed.

The next example has no practical value; it just demonstrates awk's formatted output and text analysis: print the name and uid from the first three lines of /etc/passwd, plus the number of lines processed.

[root@iZuf6b1znamggglbunalhzZ ~]# head -3 /etc/passwd | awk  'BEGIN{FS=":";print "name\tuid"}{print $1,"\t"$3}END{print "sum lines "NR}'
        name    uid
        root    0
        bin     1
        daemon  2
        sum lines 3

The command uses awk's BEGIN/END structure. The action after BEGIN runs before processing starts; here it prints the header columns name and uid (\t is a tab). The middle action runs for each line of input, printing columns 1 and 3 separated by a tab. The END action runs after processing finishes, printing a summary line; NR is an awk built-in variable holding the number of records processed.

Suppose we want to know how many TCP connections a Linux server has in each state. Without a script, you could run netstat -na, look at the connection states, and count each state by hand; with a script, it is one pipeline:

[root@iZuf6b1znamggglbunalhzZ ~]# netstat -na | awk '/^tcp/ {++S[$NF]} END {for(a in S) print a, S[a]}'
        LISTEN 1
        ESTABLISHED 2
        TIME_WAIT 1

This script uses some of awk's more advanced features and deserves a closer look. The pattern is /^tcp/, matching the lines that begin with tcp. $NF is an awk built-in meaning the last field of the line; look at what the last field is in the tcp lines of netstat -na:

    [root@iZ2ze0t8khaprrpfvmevjiZ ~]# netstat -na|grep tcp
        tcp        0      0 0.0.0.0:111             0.0.0.0:*               LISTEN
        tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN
        tcp        0     64 172.20.11.244:22        112.224.4.86:47071      ESTABLISHED
        tcp        0      0 172.20.11.244:56566     100.100.30.26:80        ESTABLISHED
        tcp        0      0 172.20.11.244:60892     100.100.18.120:443      TIME_WAIT

The last column is exactly the connection state. S[$NF] defines an array whose elements are S[LISTEN], S[ESTABLISHED], and so on, each starting at 0; for every matched line, S[state] is incremented, and when the whole input has been processed, each array element and its value are printed.

Combined with other commands, awk can accomplish fairly complex operations, such as killing a whole group of processes:

[root@iZuf6b1znamggglbunalhzZ ~]# ps -ef | grep httpd | awk {'print $2'} | xargs kill -9
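The connection-state counter above needs a live netstat; to experiment with the idiom itself, canned input works just as well. The lines below are invented sample data in netstat -na format, written to a throwaway file for the demo.

```shell
# The ++S[$NF] counting idiom from above, run against canned netstat-style lines.
cat > /tmp/demo_net.txt <<'EOF'
tcp        0      0 0.0.0.0:22        0.0.0.0:*          LISTEN
tcp        0      0 10.0.0.5:22       10.0.0.9:47071     ESTABLISHED
tcp        0      0 10.0.0.5:56566    10.0.0.8:80        ESTABLISHED
tcp        0      0 10.0.0.5:60892    10.0.0.7:443       TIME_WAIT
EOF
awk '/^tcp/ {++S[$NF]} END {for (a in S) print a, S[a]}' /tmp/demo_net.txt | sort
```

The sort at the end is worth noting: for-in over an awk array yields elements in an unspecified order, so piping through sort makes the output stable.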

Oracle 21c: installing with rpm

Among database products, Oracle is one of the more complicated to install, especially on Linux and Unix. The process involves many steps: an interactive install needs an X display connection (e.g. Xmanager), while a silent install requires familiarity with Oracle's architecture, the installation steps, and the response file; a small slip causes errors. The most common failure during installation is the graphical installer not appearing; another easily overlooked problem is leaving the default hostname bound to the loopback address, which makes dbca and netca fail and aborts the installation. This affects both the interactive graphical install and the silent install. So when installing Oracle, especially for the first time, read Oracle's installation guide for your operating system carefully; even experienced installers must be cautious, because getting one easily overlooked detail wrong means a failed install.

Things have now improved: Oracle provides RPM packages, which greatly simplify installation, lower the barrier to entry, and remove the dependency on an X manager such as Xmanager. This article installs Oracle 21c on CentOS 8 from RPM. Oracle provides two packages:

oracle-database-preinstall-21c-1.0-1.el8.x86_64.rpm
oracle-database-ee-21c-1.0-1.ol8.x86_64.rpm

With these two packages the installation is straightforward. Both can be downloaded from Oracle's website; take care to match your operating system's architecture and version.

1 Installing the preinstall package

The first package to install is the preinstall package, installed directly with rpm:

[root@iZ2ze0t8khaprrpfvmevjiZ ~]# rpm -ivh oracle-database-preinstall-21c-1.0-1.el8.x86_64.rpm
warning: oracle-database-preinstall-21c-1.0-1.el8.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID ad986da3: NOKEY
error: Failed dependencies:
        compat-openssl10 is needed by oracle-database-preinstall-21c-1.0-1.el8.x86_64
        ksh is needed by oracle-database-preinstall-21c-1.0-1.el8.x86_64
        libnsl is needed by oracle-database-preinstall-21c-1.0-1.el8.x86_64
        nfs-utils is needed by oracle-database-preinstall-21c-1.0-1.el8.x86_64
        xorg-x11-utils is needed by oracle-database-preinstall-21c-1.0-1.el8.x86_64
        xorg-x11-xauth is needed by oracle-database-preinstall-21c-1.0-1.el8.x86_64

The error occurs because packages the preinstall package depends on are missing, so the dependency check fails. The fix is simple: install the packages it asks for. The server here is an Alibaba Cloud ECS instance whose yum repositories were configured when the OS was installed, so the packages named above can be installed directly with yum:

[root@iZ2ze0t8khaprrpfvmevjiZ ~]# yum install compat-openssl10
Installed:
  compat-openssl10-1:1.0.2o-3.el8.x86_64
Complete!

[root@iZ2ze0t8khaprrpfvmevjiZ ~]# yum install compat-openssl10 ksh libnsl nfs-utils xorg-x11-utils xorg-x11-xauth
Installed:
  gssproxy-0.8.0-19.el8.x86_64          keyutils-1.5.10-9.el8.x86_64            ksh-20120801-254.el8.x86_64
  libICE-1.0.9-15.el8.x86_64            libSM-1.2.3-1.el8.x86_64                libX11-xcb-1.6.8-5.el8.x86_64
  libXcomposite-0.4.4-14.el8.x86_64     libXi-1.7.10-1.el8.x86_64               libXinerama-1.1.4-1.el8.x86_64
  libXmu-1.1.3-1.el8.x86_64             libXrandr-1.5.2-1.el8.x86_64            libXt-1.1.5-12.el8.x86_64
  libXtst-1.2.3-7.el8.x86_64            libXv-1.0.11-7.el8.x86_64               libXxf86dga-1.1.5-1.el8.x86_64
  libXxf86misc-1.0.4-1.el8.x86_64       libXxf86vm-1.1.4-9.el8.x86_64           libdmx-1.1.4-3.el8.x86_64
  libnsl-2.28-164.el8.x86_64            libverto-libevent-0.3.0-5.el8.x86_64    nfs-utils-1:2.3.3-46.el8.x86_64
  rpcbind-1.2.5-8.el8.x86_64            xorg-x11-utils-7.5-28.el8.x86_64        xorg-x11-xauth-1:1.0.9-12.el8.x86_64
Complete!

The command output shows that the previously missing packages installed successfully. With the dependencies resolved, install the preinstall package again:

[root@iZ2ze0t8khaprrpfvmevjiZ ~]# rpm -ivh oracle-database-preinstall-21c-1.0-1.el8.x86_64.rpm
warning: oracle-database-preinstall-21c-1.0-1.el8.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID ad986da3: NOKEY
Verifying...                          ################################# [100%]
Preparing...                          ################################# [100%]
Updating / installing...
   1:oracle-database-preinstall-21c-1.################################# [100%]

The preinstall package installed successfully.

2 Installing Oracle 21c Enterprise Edition

The "ee" in the package name stands for Enterprise Edition.

[root@iZ2ze0t8khaprrpfvmevjiZ ~]# rpm -ivh oracle-database-ee-21c-1.0-1.ol8.x86_64.rpm
warning: oracle-database-ee-21c-1.0-1.ol8.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID ad986da3: NOKEY
Verifying...                          ################################# [100%]
Preparing...                          ################################# [100%]
Updating / installing...
   1:oracle-database-ee-21c-1.0-1     ################################# [100%]
[INFO] Executing post installation scripts...
[INFO] Oracle home installed successfully and ready to be configured.
To configure a sample Oracle Database you can execute the following service configuration script as root: /etc/init.d/oracledb_ORCLCDB-21c configure

As the messages show, after the package was installed a post-installation script ran; when it finished, it reported success and said the database can now be configured.

3 Configuring the database

Following the hint above, running /etc/init.d/oracledb_ORCLCDB-21c configure as root creates a sample database:

[root@iZ2ze0t8khaprrpfvmevjiZ ~]# /etc/init.d/oracledb_ORCLCDB-21c configure
Configuring Oracle Database ORCLCDB.
Prepare for db operation
8% complete
Copying database files
31% complete
Creating and starting Oracle instance
32% complete
36% complete
40% complete
43% complete
46% complete
Completing Database Creation
51% complete

At this point the process paused for a long time, long enough that I suspected the server had hung. The ECS console monitoring showed CPU utilization was not high; disk throughput sat at a bit over 100 MB/s with unremarkable IOPS, and on the graph that 100-odd MB/s was a flat line with no change for a long while. I opened another terminal session to the ECS instance and checked with iostat: the read and write throughput were not constant at all but changed continuously, which confirmed the server had not hung. Evidently the ECS monitoring has limits: its sampling interval is long, so short-term variation is invisible, and judging only from the ECS monitoring would have led to the wrong conclusion. When deciding whether a system is really stuck, use multiple tools and techniques and weigh information from several sources before drawing a conclusion.

54% complete
Creating Pluggable Databases
58% complete
77% complete
Executing Post Configuration Actions
100% complete
Database creation complete. For details check the logfiles at:
 /opt/oracle/cfgtoollogs/dbca/ORCLCDB.
Database Information:
Global Database Name:ORCLCDB
System Identifier(SID):ORCLCDB
Look at the log file "/opt/oracle/cfgtoollogs/dbca/ORCLCDB/ORCLCDB.log" for further details.
Database configuration completed successfully. The passwords were auto generated, you must change them by connecting to the database using 'sqlplus / as sysdba' as the oracle user.

The database was configured successfully. The passwords were auto-generated, so log in to the database and change them to your own. The command also printed the database information: both the global database name and the SID are ORCLCDB.

4 Logging in to the database

To log in to the configured database, switch to the oracle user and set a few environment variables first:

export ORACLE_HOME=/opt/oracle/product/21c/dbhome_1
export ORACLE_SID=ORCLCDB
export CHARSET=AL32UTF8
export PATH=$PATH:$ORACLE_HOME/bin

You can run these commands by hand, or add them to .bash_profile in the oracle user's home directory so they are set automatically at every login. Then run sqlplus to connect:

[oracle@iZ2ze0t8khaprrpfvmevjiZ ~]$ sqlplus / as sysdba

Oracle 21c no longer supports non-CDB databases, only container databases with pluggable databases, and configuration automatically created one container database and one pluggable database. The sqlplus command above connects to the container database; switch to the pluggable database with:

SQL> alter session set container=ORCLPDB1;
Session altered.

You can also switch back to the root container, whose name is cdb$root:

SQL> alter session set container=cdb$root;
Session altered.

Use show to display the current container name:

SQL> show con_name;
CON_NAME
------------------------------
CDB$ROOT

The first time the freshly configured database starts, the pluggable database is open; after an instance restart only the container database is open, and the pluggable database is in mount state. Open it with:

SQL> alter pluggable database open;
Pluggable database altered.

To connect to the pluggable database directly, Oracle has already created a service for it, and the listener listens on that service. Assuming a user named test with password test123 has been created in the pluggable database orclpdb1, the following command connects straight to it; from there, working with the database is no different from a non-CDB:

[oracle@iZ2ze0t8khaprrpfvmevjiZ ~]$ sqlplus test/test123@iZ2ze0t8khaprrpfvmevjiZ/orclpdb1
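The four exports above can be persisted in the oracle user's ~/.bash_profile; a minimal sketch, assuming the RPM install layout used in this article (a demo file name is used here instead of the real ~/.bash_profile so it can be tried anywhere):

```shell
# Sketch: persist the Oracle environment (paths follow this article's RPM layout).
PROFILE=./bash_profile_oracle_demo   # demo file; normally ~/.bash_profile

cat > "$PROFILE" <<'EOF'
export ORACLE_HOME=/opt/oracle/product/21c/dbhome_1
export ORACLE_SID=ORCLCDB
export CHARSET=AL32UTF8
export PATH=$PATH:$ORACLE_HOME/bin
EOF

# Load it into the current shell and verify the variables took effect.
. "$PROFILE"
echo "$ORACLE_SID"    # ORCLCDB
```

After sourcing the file (or at the next login, if it is appended to ~/.bash_profile), sqlplus can be run without any further setup.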

Linux tips: editing text files with vi

For interactive editing of text files on a Linux system, the most common tools are vi and vim. vim can do more than edit text: it is also a programmer's editor with strong customization, syntax highlighting, multi-window editing, code folding, plugin support, and more. For ordinary text editing we usually need not distinguish the two; either vi or vim will do.

vi is a command-line tool with no mouse support. To use it you must understand its basic concepts and master a core set of operations. The entry barrier is much higher than Notepad or WordPad on Windows, but so is the power. Once you are fluent, you edit files far more efficiently and can do things that Notepad and WordPad simply cannot.

1 Basic concepts

The first thing to know is that vi has three modes: command mode, insert mode, and last-line mode.

After typing vim filename in the shell, the editor opens in command mode: keystrokes are interpreted as commands, and you can copy, paste, delete, search, and so on.

In command mode, typing i, a, o, O, etc. enters insert mode, and --INSERT-- appears at the bottom of the screen. In insert mode the arrow keys move the cursor, the Backspace key deletes characters, and keystrokes are inserted as text. Press Esc to return to command mode.

In command mode, typing : enters last-line mode, where you can save, quit, configure, search, and so on.

2 Basic operations

The simplest way to edit with vi is to press i right after opening the editor and work in insert mode. This is much like using Windows Notepad: a little less efficient, but it gets the job done.

For fast, convenient editing, work in command mode, which offers many quick commands that raise efficiency considerably.

In command mode the arrow keys move the cursor, and so do the shortcuts h, j, k, l (left, down, up, right). These four keys sit where the fingers rest and are easy to find; with practice they are faster than the arrow keys.

If the file has many lines, say a thousand, moving with the arrow keys alone is not just slow; after a while your hand goes numb. vi provides several commands for moving quickly: nG moves to line n, gg moves to the first line, G moves to the last line (the case matters here), and Ctrl+o jumps back to the cursor's previous position.

Command mode can also edit text: x deletes the current character, dd deletes a whole line, dw deletes a word. r replaces the current character without switching modes. R enters replace mode, overwriting from the current character onward; --REPLACE-- at the bottom of the screen shows vi is in replace mode, much like overwrite mode in Word. When you finish replacing, remember to press Esc to return to command mode.

Command mode also supports copy and paste: yy copies the current line, and p pastes the copied content on the line below the cursor. To copy several lines, prefix yy with a count; to copy three lines, type 3yy.

To undo an operation, use u; with a count it undoes the last n operations.

To search for a string, type / followed by the string and press Enter to search forward from the cursor. Press n to jump to the next match; after reaching the bottom of the file the search wraps to the top and continues, looping indefinitely. Typing any other command ends the current search.

When you finish editing, how do you quit? Press : to enter last-line mode. q quits, but only if the file is unmodified; to discard changes and quit, use q!, where ! forces the quit. wq saves and quits, and :w saves at any point during editing.

These are the basics of vi; with them, editing text in vi becomes much faster. The next section covers a few extra tricks which, used well, push text-processing efficiency further still.

3 Practical tips

3.1 Displaying line numbers

In last-line mode, the set number command shows a line number at the left of every line. This is very useful when debugging scripts: when a script fails it reports the line and column of the error, and with line numbers displayed you can locate the offending line immediately; misreading the line can cause a lot of confusion.

3.2 Moving quickly within a line

In command mode, ^ jumps to the beginning of the line and $ jumps to the end. These two commands match the line-start and line-end anchors in Linux regular expressions, so they should be easy to remember.

3.3 Checking a file's encoding

Sometimes we need to know a file's encoding. In last-line mode, type set fileencoding; for the file being edited here it shows utf-8.
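The set number setting from section 3.1, along with the syntax highlighting mentioned at the top, can be made permanent by putting them in ~/.vimrc; a minimal sketch (the choice of options here is just an example):

```vim
" ~/.vimrc -- applied every time vim starts
set number      " show line numbers (section 3.1)
syntax on       " enable syntax highlighting (vim only)
```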

Installing the latest OceanBase Community Edition with obd

Lab report: introductory Linux commands, files and permissions

This lab is very simple, just the most basic Linux file and permission operations, but it contains some practical tricks and easily overlooked details. This report records and explains them in detail; it should be of some help for operations and administration work, and convenient for my own future reference. Have a look, there may be things you don't normally notice.

1 File management

Creating, listing, moving, deleting, and renaming files and directories are operations we do constantly in operations work. They involve the most basic commands in a Linux system, and using them correctly is a baseline requirement for a system administrator.

1.1 Files and directories

One of the most basic operations is showing the current directory, with pwd:

[root@iZuf6jb1biwnrz7sk87e7sZ ~]# pwd
/root

This should be one of an administrator's most frequently used commands. Before every operation it is best to run it to confirm the current directory, then carry on, especially before deleting files or directories by relative path: a relative path is resolved from the user's current working directory, and if that is not what you think, you may delete the wrong files or directories, with unpredictable consequences.

The command to list files is ls, which has several useful options. Plain ls shows all files in the current directory, not including hidden files:

[root@iZuf6jb1biwnrz7sk87e7sZ ~]# ls

The current directory has no visible files. Adding -a shows the hidden files too:

[root@iZuf6jb1biwnrz7sk87e7sZ ~]# ls -a
.  ..  .bash_history  .bash_logout  .bash_profile  .bashrc  .cshrc  .pip  .pydistutils.cfg  .ssh  .tcshrc

. is the current directory and .. the parent directory; .bash_profile is the current user's bash configuration file, in which we can set the user's usual environment variables.

ls with -l shows detailed information about files and directories:

[root@iZuf6jb1biwnrz7sk87e7sZ ~]# ls -l
total 0
[root@iZuf6jb1biwnrz7sk87e7sZ ~]# ls -la
total 44
dr-xr-x---.  4 root root 4096 Jul 11 08:56 .
dr-xr-xr-x. 18 root root 4096 Jul 11 08:55 ..
-rw-------   1 root root   95 Jul 11 08:56 .bash_history
-rw-r--r--.  1 root root   18 Dec 29  2013 .bash_logout
-rw-r--r--.  1 root root  176 Dec 29  2013 .bash_profile
-rw-r--r--.  1 root root  176 Dec 29  2013 .bashrc
-rw-r--r--.  1 root root  100 Dec 29  2013 .cshrc
drwxr-xr-x   2 root root 4096 Apr 26  2020 .pip
-rw-r--r--   1 root root  206 Jul 11 08:55 .pydistutils.cfg
drwx------   2 root root 4096 Apr 26  2020 .ssh
-rw-r--r--.  1 root root  129 Dec 29  2013 .tcshrc

The -t option sorts files by time:

[root@iZuf6jb1biwnrz7sk87e7sZ ~]# ls -at
.  .bash_history  ..  .pydistutils.cfg  .pip  .ssh  .bash_logout  .bash_profile  .bashrc  .cshrc  .tcshrc

The order is clearly not alphabetical; add -l to see the details:

[root@iZuf6jb1biwnrz7sk87e7sZ ~]# ls -alt
total 44
dr-xr-x---.  4 root root 4096 Jul 11 08:56 .
-rw-------   1 root root   95 Jul 11 08:56 .bash_history
dr-xr-xr-x. 18 root root 4096 Jul 11 08:55 ..
-rw-r--r--   1 root root  206 Jul 11 08:55 .pydistutils.cfg
drwxr-xr-x   2 root root 4096 Apr 26  2020 .pip
drwx------   2 root root 4096 Apr 26  2020 .ssh
-rw-r--r--.  1 root root   18 Dec 29  2013 .bash_logout
-rw-r--r--.  1 root root  176 Dec 29  2013 .bash_profile
-rw-r--r--.  1 root root  176 Dec 29  2013 .bashrc
-rw-r--r--.  1 root root  100 Dec 29  2013 .cshrc
-rw-r--r--.  1 root root  129 Dec 29  2013 .tcshrc

The -r option reverses the order:

[root@iZuf6jb1biwnrz7sk87e7sZ ~]# ls -ra
.tcshrc  .ssh  .pydistutils.cfg  .pip  .cshrc  .bashrc  .bash_profile  .bash_logout  .bash_history  ..  .

As the output shows, the listing is in reverse alphabetical order. -R lists recursively, i.e. shows the contents of every level of subdirectory:

[root@iZuf6jb1biwnrz7sk87e7sZ ~]# ls -Ral
.:
total 44
dr-xr-x---.  4 root root 4096 Jul 11 08:56 .
dr-xr-xr-x. 18 root root 4096 Jul 11 08:55 ..
-rw-------   1 root root   95 Jul 11 08:56 .bash_history
-rw-r--r--.  1 root root   18 Dec 29  2013 .bash_logout
-rw-r--r--.  1 root root  176 Dec 29  2013 .bash_profile
-rw-r--r--.  1 root root  176 Dec 29  2013 .bashrc
-rw-r--r--.  1 root root  100 Dec 29  2013 .cshrc
drwxr-xr-x   2 root root 4096 Apr 26  2020 .pip
-rw-r--r--   1 root root  206 Jul 11 08:55 .pydistutils.cfg
drwx------   2 root root 4096 Apr 26  2020 .ssh
-rw-r--r--.  1 root root  129 Dec 29  2013 .tcshrc

./.pip:
total 12
drwxr-xr-x  2 root root 4096 Apr 26  2020 .
dr-xr-x---. 4 root root 4096 Jul 11 08:56 ..
-rw-r--r--  1 root root  252 Jul 11 08:55 pip.conf

./.ssh:
total 8
drwx------  2 root root 4096 Apr 26  2020 .
dr-xr-x---. 4 root root 4096 Jul 11 08:56 ..
-rw-------  1 root root    0 Jul 11 08:55 authorized_keys

[root@iZuf6jb1biwnrz7sk87e7sZ ~]# ll -a
total 44
dr-xr-x---.  4 root root 4096 Jul 11 08:56 .
dr-xr-xr-x. 18 root root 4096 Jul 11 08:55 ..
-rw-------   1 root root   95 Jul 11 08:56 .bash_history
-rw-r--r--.  1 root root   18 Dec 29  2013 .bash_logout
-rw-r--r--.  1 root root  176 Dec 29  2013 .bash_profile
-rw-r--r--.  1 root root  176 Dec 29  2013 .bashrc
-rw-r--r--.  1 root root  100 Dec 29  2013 .cshrc
drwxr-xr-x   2 root root 4096 Apr 26  2020 .pip
-rw-r--r--   1 root root  206 Jul 11 08:55 .pydistutils.cfg
drwx------   2 root root 4096 Apr 26  2020 .ssh
-rw-r--r--.  1 root root  129 Dec 29  2013 .tcshrc

1.2 Changing the current directory

Use cd to change the user's current path:

[root@iZuf6jb1biwnrz7sk87e7sZ ~]# cd /usr/local/etc
[root@iZuf6jb1biwnrz7sk87e7sZ etc]# pwd
/usr/local/etc

We are now under /usr/local/etc. cd .. moves to the parent directory; cd ~ moves to the current user's home directory, and so does plain cd:

[root@iZuf6jb1biwnrz7sk87e7sZ etc]# cd
[root@iZuf6jb1biwnrz7sk87e7sZ ~]# pwd
/root

The current directory is back to root's home directory.

1.3 Creating files and adjusting timestamps

Use touch to create empty files:

[root@iZuf6jb1biwnrz7sk87e7sZ ~]# touch demo1.txt demo2.txt

ll is a system-defined alias that runs ls -l:

[root@iZuf6jb1biwnrz7sk87e7sZ ~]# ll
total 0
-rw-r--r-- 1 root root 0 Jul 11 09:02 demo1.txt
-rw-r--r-- 1 root root 0 Jul 11 09:02 demo2.txt

touch can also update a file's timestamp; just give it the file name:

[root@iZuf6jb1biwnrz7sk87e7sZ ~]# touch demo1.txt
[root@iZuf6jb1biwnrz7sk87e7sZ ~]# ll
total 0
-rw-r--r-- 1 root root 0 Jul 11 09:03 demo1.txt
-rw-r--r-- 1 root root 0 Jul 11 09:02 demo2.txt

demo1.txt's timestamp has changed to the current time. The -r option sets one file's timestamp from another file; for example, to set demo1.txt's timestamp from demo2.txt's, write:

[root@iZuf6jb1biwnrz7sk87e7sZ ~]# touch -r demo2.txt demo1.txt
[root@iZuf6jb1biwnrz7sk87e7sZ ~]# ll
total 0
-rw-r--r-- 1 root root 0 Jul 11 09:02 demo1.txt
-rw-r--r-- 1 root root 0 Jul 11 09:02 demo2.txt

demo1.txt's timestamp now matches demo2.txt's.

1.4 Directory operations

Create directories with mkdir; the -p option creates missing parent directories:

[root@iZuf6jb1biwnrz7sk87e7sZ ~]# mkdir -p a/b/c/d
[root@iZuf6jb1biwnrz7sk87e7sZ ~]# ls -lR
.:
total 4
drwxr-xr-x 3 root root 4096 Jul 11 09:04 a
-rw-r--r-- 1 root root    0 Jul 11 09:02 demo1.txt
-rw-r--r-- 1 root root    0 Jul 11 09:02 demo2.txt

./a:
total 4
drwxr-xr-x 3 root root 4096 Jul 11 09:04 b

./a/b:
total 4
drwxr-xr-x 3 root root 4096 Jul 11 09:04 c

./a/b/c:
total 4
drwxr-xr-x 2 root root 4096 Jul 11 09:04 d

./a/b/c/d:
total 0

Directories a, b, c, d were created, with a, b, c as d's ancestors at each level. The tree command gives a more intuitive view; on some Linux systems it must be installed separately:

[root@iZuf6jb1biwnrz7sk87e7sZ ~]# tree
.
├── a
│   └── b
│       └── c
│           └── d
├── demo1.txt
└── demo2.txt

4 directories, 2 files

Files and directories are usually deleted with rm; -r deletes a directory together with all its subdirectories and the files in them. Use this option with care: before running the command, be sure you know exactly what it will do, and before pressing Enter, check that the command on the screen is the one you intended. A stray space can cause irreparable damage; before rm -rf * in particular, always confirm the current working directory rather than hitting Enter casually.

[root@iZuf6jb1biwnrz7sk87e7sZ ~]# rm -rf demo*
[root@iZuf6jb1biwnrz7sk87e7sZ ~]# ll
total 4
drwxr-xr-x 3 root root 4096 Jul 11 09:04 a

This deleted every file and directory whose name begins with demo. To delete subdirectory b under directory a, write:

[root@iZuf6jb1biwnrz7sk87e7sZ ~]# rm -rf a/b
[root@iZuf6jb1biwnrz7sk87e7sZ ~]# tree
.
└── a

1 directory, 0 files

Everything under b, files and subdirectories alike, has been deleted.

Copy files and directories with cp; the -r option copies a directory with all levels of subdirectories and files:

[root@iZuf6jb1biwnrz7sk87e7sZ ~]# cp -r c a/b/
[root@iZuf6jb1biwnrz7sk87e7sZ ~]# tree
.
├── a
│   └── b
│       └── c
│           └── d
└── c
    └── d

6 directories, 0 files

Move files and directories with mv:

[root@iZuf6jb1biwnrz7sk87e7sZ ~]# touch a.txt
[root@iZuf6jb1biwnrz7sk87e7sZ ~]# mv a.txt b.txt
[root@iZuf6jb1biwnrz7sk87e7sZ ~]# ll
total 8
drwxr-xr-x 3 root root 4096 Jul 11 09:07 a
-rw-r--r-- 1 root root    0 Jul 11 09:09 b.txt
drwxr-xr-x 3 root root 4096 Jul 11 09:07 c

a.txt was moved to b.txt; the file a.txt no longer exists. Directories can be moved too:

[root@iZuf6jb1biwnrz7sk87e7sZ ~]# mv c a/b/c/d
[root@iZuf6jb1biwnrz7sk87e7sZ ~]# tree
.
├── a
│   └── b
│       └── c
│           └── d
│               └── c
│                   └── d
└── b.txt

6 directories, 1 file

Move everything in the current directory to /tmp:

[root@iZuf6jb1biwnrz7sk87e7sZ ~]# mv ./* /tmp
[root@iZuf6jb1biwnrz7sk87e7sZ ~]# ls -l /tmp
total 27804
drwxr-xr-x 3 root root     4096 Jul 11 09:07 a
drwxr-xr-x 3 root root     4096 Jul 11 08:55 aliyun_assist_6E9FF212AC6DEC9207A0959101945F57
-rw-r--r-- 1 root root 28453387 Jul 11 08:55 aliyun_assist_6E9FF212AC6DEC9207A0959101945F57.zip
-rw------- 1 root root        0 Jul 11 08:55 AliyunAssistClientSingleLock.lock
srwxrwxrwx 1 root root        0 Jul 11 08:55 argus.sock
-rw-r--r-- 1 root root        0 Jul 11 09:09 b.txt
drwx------ 3 root root     4096 Jul 11 08:55 systemd-private-16b1a8d8373a49e29d79467dc5322268-chronyd.service-SGpg13
drwx------ 2 root root     4096 Jul 11 08:55 tmp.I0uHH4StYu
[root@iZuf6jb1biwnrz7sk87e7sZ ~]# tree /tmp/a
/tmp/a
└── b
    └── c
        └── d
            └── c
                └── d

5 directories, 0 files

1.5 Renaming files

[root@iZuf6jb1biwnrz7sk87e7sZ ~]# touch demo1.txt demo2.txt
[root@iZuf6jb1biwnrz7sk87e7sZ ~]# ls
demo1.txt  demo2.txt
[root@iZuf6jb1biwnrz7sk87e7sZ ~]# rename demo DEMO *
[root@iZuf6jb1biwnrz7sk87e7sZ ~]# ls
DEMO1.txt  DEMO2.txt

Every file in the current directory whose name begins with demo has been renamed to begin with DEMO.

[root@iZuf6jb1biwnrz7sk87e7sZ ~]# rename .txt .text *
[root@iZuf6jb1biwnrz7sk87e7sZ ~]# ls
DEMO1.text  DEMO2.text

The extension was changed from txt to text.

2 File permission management

2.1 Showing a file's permissions

[root@iZuf6jb1biwnrz7sk87e7sZ ~]# ls -l /boot/
total 140804
-rw-r--r--  1 root root   153187 Mar 18  2020 config-3.10.0-1062.18.1.el7.x86_64
-rw-r--r--. 1 root root   152976 Aug  8  2019 config-3.10.0-1062.el7.x86_64
drwxr-xr-x. 3 root root     4096 Apr 26  2020 efi
drwxr-xr-x. 2 root root     4096 Apr 26  2020 grub
drwx------. 5 root root     4096 Apr 26  2020 grub2
-rw-------. 1 root root 57931787 Apr 26  2020 initramfs-0-rescue-20200426154603174201708213343640.img
-rw-------  1 root root 18197454 Apr 26  2020 initramfs-3.10.0-1062.18.1.el7.x86_64.img
-rw-------  1 root root 10734218 Apr 26  2020 initramfs-3.10.0-1062.18.1.el7.x86_64kdump.img

In the listing above, position 1 is the file type: d means directory, - means regular file. Positions 2-4 are the owner's permissions, positions 5-7 the permissions of users in the same group, and positions 8-10 those of users in other groups. Position 11, a period (.), indicates an SELinux security label.

2.2 Changing file permissions

Create a hello.sh script:

[root@iZuf6jb1biwnrz7sk87e7sZ ~]# echo "echo 'Hello World'" > hello.sh
[root@iZuf6jb1biwnrz7sk87e7sZ ~]# ll
total 4
-rw-r--r-- 1 root root  0 Jul 11 09:11 DEMO1.text
-rw-r--r-- 1 root root  0 Jul 11 09:11 DEMO2.text
-rw-r--r-- 1 root root 19 Jul 11 09:15 hello.sh

Give the script execute permission with chmod:

[root@iZuf6jb1biwnrz7sk87e7sZ ~]# chmod u+x hello.sh
[root@iZuf6jb1biwnrz7sk87e7sZ ~]# ll
total 4
-rw-r--r-- 1 root root  0 Jul 11 09:11 DEMO1.text
-rw-r--r-- 1 root root  0 Jul 11 09:11 DEMO2.text
-rwxr--r-- 1 root root 19 Jul 11 09:15 hello.sh

Revoke it again:

[root@iZuf6jb1biwnrz7sk87e7sZ ~]# chmod u-x hello.sh
[root@iZuf6jb1biwnrz7sk87e7sZ ~]# ll
total 4
-rw-r--r-- 1 root root  0 Jul 11 09:11 DEMO1.text
-rw-r--r-- 1 root root  0 Jul 11 09:11 DEMO2.text
-rw-r--r-- 1 root root 19 Jul 11 09:15 hello.sh

Permissions can also be changed numerically: 4 is read, 2 is write, 1 is execute. 744 gives the owner read, write, and execute, and gives other group members and users outside the group read permission:

[root@iZuf6jb1biwnrz7sk87e7sZ ~]# chmod 744 hello.sh
[root@iZuf6jb1biwnrz7sk87e7sZ ~]# ll
total 4
-rw-r--r-- 1 root root  0 Jul 11 09:11 DEMO1.text
-rw-r--r-- 1 root root  0 Jul 11 09:11 DEMO2.text
-rwxr--r-- 1 root root 19 Jul 11 09:15 hello.sh

With the appropriate permissions, the user can run the script:

[root@iZuf6jb1biwnrz7sk87e7sZ ~]# bash hello.sh
Hello World

2.3 Changing a file's owner and group

Check the current user:

[root@iZuf6jb1biwnrz7sk87e7sZ ~]# whoami
root

This too is a command administrators should run often; confirming the current user before doing anything is a prudent habit that avoids a lot of unnecessary trouble.

Create a file:

[root@iZuf6jb1biwnrz7sk87e7sZ ~]# touch test.txt

Create a test user:

[root@iZuf6jb1biwnrz7sk87e7sZ ~]# adduser test

Set its password:

[root@iZuf6jb1biwnrz7sk87e7sZ ~]# passwd test
Changing password for user test.
New password:
BAD PASSWORD: The password contains the user name in some form
Retype new password:
passwd: all authentication tokens updated successfully.

Create an admin user:

[root@iZuf6jb1biwnrz7sk87e7sZ ~]# adduser admin
[root@iZuf6jb1biwnrz7sk87e7sZ ~]# passwd admin
Changing password for user admin.
New password:
BAD PASSWORD: The password contains the user name in some form
Retype new password:
passwd: all authentication tokens updated successfully.

Change the file's owner with chown:

[root@iZuf6jb1biwnrz7sk87e7sZ ~]# chown test test.txt
[root@iZuf6jb1biwnrz7sk87e7sZ ~]# ls -l
total 4
-rw-r--r-- 1 root root  0 Jul 11 09:11 DEMO1.text
-rw-r--r-- 1 root root  0 Jul 11 09:11 DEMO2.text
-rwxr--r-- 1 root root 19 Jul 11 09:15 hello.sh
-rw-r--r-- 1 test root  0 Jul 11 09:18 test.txt

The owner is now test; the group is still root. To change owner and group at once, still with chown, write it like this:

[root@iZuf6jb1biwnrz7sk87e7sZ ~]# chown admin:admin test.txt
[root@iZuf6jb1biwnrz7sk87e7sZ ~]# ll
total 4
-rw-r--r-- 1 root  root   0 Jul 11 09:11 DEMO1.text
-rw-r--r-- 1 root  root   0 Jul 11 09:11 DEMO2.text
-rwxr--r-- 1 root  root  19 Jul 11 09:15 hello.sh
-rw-r--r-- 1 admin admin  0 Jul 11 09:18 test.txt

To change only the group, use chgrp:

[root@iZuf6jb1biwnrz7sk87e7sZ ~]# chgrp root test.txt
[root@iZuf6jb1biwnrz7sk87e7sZ ~]# ll
total 4
-rw-r--r-- 1 root  root  0 Jul 11 09:11 DEMO1.text
-rw-r--r-- 1 root  root  0 Jul 11 09:11 DEMO2.text
-rwxr--r-- 1 root  root 19 Jul 11 09:15 hello.sh
-rw-r--r-- 1 admin root  0 Jul 11 09:18 test.txt
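The symbolic and numeric chmod forms in section 2.2 can be verified side by side; a small sketch (stat -c '%a' prints the numeric mode; GNU coreutils is assumed, and a temporary file is used so the sketch can be run anywhere):

```shell
# Sketch: numeric permission modes (4=read, 2=write, 1=execute).
f=$(mktemp)
echo "echo 'Hello World'" > "$f"

chmod 744 "$f"                 # owner rwx (4+2+1), group r (4), others r (4)
mode_744=$(stat -c '%a' "$f")
echo "$mode_744"               # 744

chmod u-x "$f"                 # drop the owner's execute bit: 744 -> 644
mode_644=$(stat -c '%a' "$f")
echo "$mode_644"               # 644
```

Reading a numeric mode this way is a quick check after any chmod, symbolic or numeric.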

Oracle: creating a password file with orapwd

1 Environment and data

1.1 Database version

select * from v$version;
BANNER
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
PL/SQL Release 11.2.0.4.0 - Production
CORE 11.2.0.4.0 Production
TNS for Linux: Version 11.2.0.4.0 - Production
NLSRTL Version 11.2.0.4.0 - Production

1.2 Archived log location

SQL> archive log list;
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     1
Next log sequence to archive   1
Current log sequence           1

1.3 Table used to verify the backup

SQL> alter session set nls_date_format='yyyy-mm-dd hh24:mi:ss';
Session altered.
SQL> select * from test_control;
CURR_TIME
-------------------
2019-07-21 16:25:33
2019-07-27 09:32:21
2019-07-27 09:33:39

1.4 RMAN configuration

[oracle@orclserv1 ~]$ rman target /
Recovery Manager: Release 11.2.0.4.0 - Production on Sat Aug 3 10:55:59 2019
connected to target database: ORCL11G (DBID=1118535928)
RMAN> show all;
using target database control file instead of recovery catalog
RMAN configuration parameters for database with db_unique_name ORCL11G are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP OFF; # default
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
CONFIGURE COMPRESSION ALGORITHM 'BASIC' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE ; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/u01/app/oracle/product/11.2.0/db_1/dbs/snapcf_orcl11g.f'; # def

2 Manual backup and recovery

2.1 Backing up the control file manually

RMAN> backup current controlfile format '/home/oracle/backup/c_%d_%I_%T_%s';
Starting backup at 03-AUG-19
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
including current control file in backup set
channel ORA_DISK_1: starting piece 1 at 03-AUG-19
channel ORA_DISK_1: finished piece 1 at 03-AUG-19
piece handle=/home/oracle/backup/c_ORCL11G_1118535928_20190803_8 tag=TAG20190803T112625 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 03-AUG-19

The backup current controlfile command backs up the current control file by hand. The placeholders in the format option mean: %d is the database name, %I the DBID, %T the date, and %s the backup set number. The resulting backup piece appears in the command's output: /home/oracle/backup/c_ORCL11G_1118535928_20190803_8

2.2 Adding some rows to the table

SQL> insert into test_control select sysdate from dual;
1 row created.
SQL> insert into test_control select sysdate from dual;
1 row created.
SQL> commit;
SQL> select * from test_control;
CURR_TIME
-------------------
2019-07-21 16:25:33
2019-07-27 09:32:21
2019-07-27 09:33:39
2019-08-03 11:52:17
2019-08-03 11:53:05

2.3 Deleting the control files to simulate their loss

[oracle@orclserv1 backup]$ rm /u01/app/oracle/oradata/orcl11g/control01.ctl
[oracle@orclserv1 backup]$ rm /u01/app/oracle/fast_recovery_area/orcl11g/control02.ctl

After the deletion the database can no longer shut down cleanly:

SQL> shutdown immediate;
ORA-00210: cannot open the specified control file
ORA-00202: control file: '/u01/app/oracle/oradata/orcl11g/control01.ctl'
ORA-27041: unable to open file
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3

Force it down with abort:

SQL> shutdown abort;
ORACLE instance shut down.

2.4 Restoring the control file with RMAN

Start the instance in nomount mode:

RMAN> startup nomount;
Oracle instance started
Total System Global Area    1653518336 bytes
Fixed Size                     2253784 bytes
Variable Size               1006636072 bytes
Database Buffers             637534208 bytes
Redo Buffers                   7094272 bytes

Restore the control file:

RMAN> restore controlfile from '/home/oracle/backup/c_ORCL11G_1118535928_20190803_8';
Starting restore at 03-AUG-19
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=19 device type=DISK
channel ORA_DISK_1: restoring control file
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
output file name=/u01/app/oracle/oradata/orcl11g/control01.ctl
output file name=/u01/app/oracle/fast_recovery_area/orcl11g/control02.ctl
Finished restore at 03-AUG-19

The output shows the control file restored to the paths set in the database's configuration. Next, mount the database:

RMAN> alter database mount;
database mounted
released channel: ORA_DISK_1

Recover the database:

RMAN> recover database;
Starting recover at 03-AUG-19
Starting implicit crosscheck backup at 03-AUG-19
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=19 device type=DISK
Crosschecked 3 objects
Finished implicit crosscheck backup at 03-AUG-19
Starting implicit crosscheck copy at 03-AUG-19
using channel ORA_DISK_1
Finished implicit crosscheck copy at 03-AUG-19
searching for all files in the recovery area
cataloging files...
no files cataloged
using channel ORA_DISK_1
starting media recovery
archived log for thread 1 with sequence 1 is already on disk as file /u01/app/oracle/oradata/orcl11g/redo01.log
archived log file name=/u01/app/oracle/oradata/orcl11g/redo01.log thread=1 sequence=1
media recovery complete, elapsed time: 00:00:00
Finished recover at 03-AUG-19

Open the database with the resetlogs option:

RMAN> alter database open RESETLOGS;
database opened
RMAN>

Log in and check the data; it has been recovered to the state the database was in before it was shut down:

SQL> select * from test_control;
CURR_TIME
-------------------
2019-07-21 16:25:33
2019-07-27 09:32:21
2019-07-27 09:33:39
2019-08-03 11:52:17
2019-08-03 11:53:05
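The interactive session above can also be scripted. A minimal sketch of an RMAN command file under the same assumptions (the backup-piece path is the one produced in section 2.1, and the file name restore_cf.rcv is hypothetical; run it as rman target / cmdfile=restore_cf.rcv):

```
# restore_cf.rcv -- hypothetical file name; substitute your own backup piece.
startup nomount;
restore controlfile from '/home/oracle/backup/c_ORCL11G_1118535928_20190803_8';
alter database mount;
recover database;
alter database open resetlogs;
```

Scripting the steps this way removes the chance of typing them out of order under pressure during a real recovery.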

Backing up and recovering Oracle control files, part 1: manual backup and recovery

1 Preparing the environment: create a test table and seed data

SQL> select * from v$version;
BANNER
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
PL/SQL Release 11.2.0.4.0 - Production
CORE 11.2.0.4.0 Production
TNS for Linux: Version 11.2.0.4.0 - Production
NLSRTL Version 11.2.0.4.0 - Production

SQL> CREATE TABLE "SYS"."TEST_CONTROL" ("CURR_TIME" DATE);

Change the session's date display format so the differences are visible when querying the table:

SQL> alter session set nls_date_format='yyyy-mm-dd hh24:mi:ss';
Session altered.
SQL> select * from test_control;
CURR_TIME
2019-07-21 16:25:33

2 Checking the control file configuration

SQL> show parameter control
NAME                             TYPE     VALUE
control_file_record_keep_time    integer  7
control_files                    string   /u01/app/oracle/oradata/orcl11g/control01.ctl, /u01/app/oracle/fast_recovery_area/orcl11g/control02.ctl
control_management_pack_access   string   DIAGNOSTIC+TUNING

3 Backing up the control file manually

SQL> alter database backup controlfile to trace;
Database altered.

The script that recreates the control file is written to the following trace file:

NAME                VALUE
Default Trace File  /u01/app/oracle/diag/rdbms/orcl11g/orcl11g/trace/orcl11g_ora_2175.trc

4 Insert a few rows and switch the log a few times to generate several archived logs

SQL> insert into test_control select sysdate from dual;
SQL> commit;
SQL> alter system switch logfile;
System altered.
SQL> insert into test_control select sysdate from dual;
SQL> commit;
SQL> alter system switch logfile;
System altered.
SQL> select * from test_control;
CURR_TIME
2019-07-21 16:25:33
2019-07-27 09:32:21
2019-07-27 09:33:39

5 Delete the control files to simulate total loss, then shut down the database

[oracle@orclserv1 orcl11g]$ rm /u01/app/oracle/oradata/orcl11g/control01.ctl
[oracle@orclserv1 orcl11g]$ rm /u01/app/oracle/fast_recovery_area/orcl11g/control02.ctl

Querying v$database now fails: the control file cannot be opened, with an OS error of no such file or directory. shutdown immediate fails with the same error; in that case the database can be shut down with abort:

SQL> select CHECKPOINT_CHANGE#, CURRENT_SCN from v$database;
select CHECKPOINT_CHANGE#, CURRENT_SCN from v$database
*
ERROR at line 1:
ORA-00210: cannot open the specified control file
ORA-00202: control file: '/u01/app/oracle/oradata/orcl11g/control01.ctl'
ORA-27041: unable to open file
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3

SQL> shutdown immediate;
ORA-00210: cannot open the specified control file
ORA-00202: control file: '/u01/app/oracle/oradata/orcl11g/control01.ctl'
ORA-27041: unable to open file
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
SQL> shutdown abort;
ORACLE instance shut down.
SQL>

6 Start in nomount mode, recreate the control file from the backup script, recover, and open with resetlogs

1) Start the database in nomount mode:

SQL> startup nomount;
ORACLE instance started.
Total System Global Area 1653518336 bytes
Fixed Size                  2253784 bytes
Variable Size            1006636072 bytes
Database Buffers          637534208 bytes
Redo Buffers                7094272 bytes

2) Run the backed-up script to recreate the control file:

CREATE CONTROLFILE REUSE DATABASE "ORCL11G" RESETLOGS ARCHIVELOG
    MAXLOGFILES 16
    MAXLOGMEMBERS 3
    MAXDATAFILES 100
    MAXINSTANCES 8
    MAXLOGHISTORY 292
LOGFILE
  GROUP 1 '/u01/app/oracle/oradata/orcl11g/redo01.log' SIZE 50M BLOCKSIZE 512,
  GROUP 2 '/u01/app/oracle/oradata/orcl11g/redo02.log' SIZE 50M BLOCKSIZE 512,
  GROUP 3 '/u01/app/oracle/oradata/orcl11g/redo03.log' SIZE 50M BLOCKSIZE 512
-- STANDBY LOGFILE
DATAFILE
  '/u01/app/oracle/oradata/orcl11g/system01.dbf',
  '/u01/app/oracle/oradata/orcl11g/sysaux01.dbf',
  '/u01/app/oracle/oradata/orcl11g/undotbs01.dbf',
  '/u01/app/oracle/oradata/orcl11g/users01.dbf'
CHARACTER SET WE8MSWIN1252;

SQL> select instance_name, status from v$instance;
INSTANCE_NAME  STATUS
orcl11g        MOUNTED    ## the database is now in mount state
SQL> !ls /u01/app/oracle/oradata/orcl11g/control01.ctl
/u01/app/oracle/oradata/orcl11g/control01.ctl
SQL> !ls /u01/app/oracle/fast_recovery_area/orcl11g/control02.ctl
/u01/app/oracle/fast_recovery_area/orcl11g/control02.ctl
### the control files have been recreated in their original locations

3) Try to open the database:

SQL> alter database open;
alter database open
*
ERROR at line 1:
ORA-01589: must use RESETLOGS or NORESETLOGS option for database open
SQL> alter database open resetlogs;
alter database open resetlogs
*
ERROR at line 1:
ORA-01194: file 1 needs more recovery to be consistent
ORA-01110: data file 1: '/u01/app/oracle/oradata/orcl11g/system01.dbf'
SQL> alter database open readonly;
alter database open readonly
*
ERROR at line 1:
ORA-02288: invalid OPEN mode
SQL> alter database open read only;
alter database open read only

None of these modes opens the database; opening with resetlogs requires recovery first.

4) Recover the database:

recover database using backup controlfile until cancel;
ORA-00279: change 971431 generated at 07/27/2019 09:34:16 needed for thread 1
ORA-00289: suggestion :
/u01/app/oracle/fast_recovery_area/ORCL11G/archivelog/2019_07_27/o1_mf_1_10_%u_.arc
ORA-00280: change 971431 for thread 1 is in sequence #10
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
/u01/app/oracle/fast_recovery_area/ORCL11G/archivelog/2019_07_27/o1_mf_1_9_gmqbjrmd_.arc
ORA-00310: archived log contains sequence 9; sequence 10 required
ORA-00334: archived log: '/u01/app/oracle/fast_recovery_area/ORCL11G/archivelog/2019_07_27/o1_mf_1_9_gmqbjrmd_.arc'
ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
ORA-01194: file 1 needs more recovery to be consistent

Change 971431 for thread 1 is in sequence #10, which is not among the archived logs, so it must be in the online redo logs. This form of recovery does not look at the online logs itself, but pointing it at the online log by hand completes the recovery:

SQL> recover database using backup controlfile until cancel;
ORA-00279: change 971431 generated at 07/27/2019 09:34:16 needed for thread 1
ORA-00289: suggestion :
/u01/app/oracle/fast_recovery_area/ORCL11G/archivelog/2019_07_27/o1_mf_1_10_%u_.arc
ORA-00280: change 971431 for thread 1 is in sequence #10
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
/u01/app/oracle/oradata/orcl11g/redo01.log   ## supply the online log's path by hand
Log applied.
Media recovery complete.
SQL> alter database open resetlogs;          ## open the database with resetlogs
Database altered.
SQL> alter session set nls_date_format='yyyy-mm-dd hh24:mi:ss';
Session altered.
SQL> select * from test_control;
CURR_TIME
2019-07-21 16:25:33
2019-07-27 09:32:21
2019-07-27 09:33:39
#### the data in test_control matches what was there before the control files were deleted

The temp files must be recreated:

SQL> ALTER TABLESPACE TEMP ADD TEMPFILE '/u01/app/oracle/oradata/orcl11g/temp01.dbf' SIZE 20971520 REUSE AUTOEXTEND ON NEXT 655360 MAXSIZE 32767M;
Tablespace altered.

PDManer: a simple, practical data modeling tool

PDManer是一款国产的开源数据建模工具,这款开源软件已经有它有一个更高大上一点的名字,叫做元数建模工具,这款软件已经发布了3个版本,每个版本除了功能的变化升级之外,比较奇怪的是名字也一直在改变,第一个重大的版本叫做PDMan2.2.0(Physical Data Model Manager),目标是做成国产的PowerDesigner后面的版本名字就改了,叫做CHINER元数建模3.0,看着名字猜测一下大概是中国的ER(实体关系)软件的意思吧,比较吻合国产化的大环境,不知到起这个名字的真实意图是否是这样。最新的版本叫做PDManerV4.0.0,这个名字好像和前面第一个的名字又有了一点联系,不过后面的er是什么意思,着实令人费解。随着版本的升级,名字也不断的变化,这个软件大概还没有明确自己的方向和定位,找不到一个号的名字。 在百度上搜索PDman,能看到的是诸如“能与PowerDesigner 媲美的数据建模工具”,“可以取代Power Designer的数据建模工具”之类,看来这款软件经常和业界的标杆Power Designer相比较,功能也比较类似,如果能在数据建模工具中替代PD,这个开源的国产工具很可能成为大多数国内数据建模人员的首选,毕竟是免费的。1 软件的下载安装运行 最新版本PDManerV4.0.0支持Windows、mac、linux操作系统,也支持国产操作系统,Windows操作系统的下载安装比较简单,下载地址在这里:https://gitee.com/robergroup/pdmaner/releases其中Windows既有绿色免安装版,也有安装版,下载绿色安装版,解压后接入解压目录双击PDManer”,就可以启动应用了。2 PDManer基本概念 这个软件运行以后的界面像下图这个样子:看个这了界面,不知各位的第一感觉是什么,看来这个软件的UI走的是清爽简洁的风格,首页还有操作手册的连接,如果不看操作手册,恐怕很多人不知道怎么进行下一步。看了看操作手册,首先的这个软件关于元数建模的概念有个初步的了解,然后才能开始试用。新建一个项目,打开这个新建的项目后软件的界面是下图这个样子: 从左边的导航图里可以看到这个软件有四大类功能,模型、数据域、代码生成器和版本管理。 模型和数据域与数据建模有关,是运用这个软件必须理解的基本概念,这两个概念中,模型的概念比较好理解,对应的数据库的物理模型,如表、关系、视图,也包含数据字典。数据域的概念则理解起来难度大一些,百度一下数据域里面的解释各种各样,令人莫衷一是。好在我们这个只需要理解这个软件里面的数据域的概念,从帮助手册里找一下,左侧栏数据域下面有三个子项数据类型、数据域和数据库。手册里对这三个子项的是这样解释的: 1)基础类型 系统自带常见基础数据类型(如字串,小数,日期等),同时用户也可以根据自己的需要添加新的数据类型。数据类型只有类型,无长度,长度需要用户在定义字段时设置或者在数据域中进行模板化设置。 2) 数据域 数据域是在数据类型的基础上,基于当前项目,定义有一定业务含义的数据类型,例如我们定义ID为32位长度的字串,金额为18位整数+小数点后保留6位的小数,名称为250位长度的字串等,主要用于快速设置字段的数据类型。 3)数据库 数据库子项显示软件支持的数据库类型及系统默认的数据库,这个不能更改默认数据库的类型。 基础类型和数据域有一定的联系,大概意思是基础类型没有业务含义,只是项目中的用到的基础数据类型,对长度也不做限制,数据域具有一定的业务含义。举个简单的例子,整数时基础类型,电话号码(11为整数)则是一个数据域项。3 初始化项目环境 使用这个工具之前,一定要读一下操作手册,否则不但概念理解,基本操作也不容易完成。比如,在进行数据表管理之间要初始化一下项目环境,否侧建表的时候会有一些默认的必选字段不知道从哪里删除。 点击设置菜单这里表设置,默认会有几个缺省字段,把不需要的删除,也可以添加自己项目需要的必选字段。这里添加一个字段作为演示4 数据表管理 数据表管理是数据建模工具的基本操作,这款工具这一点做的不错,点击左侧栏模型,鼠标移至数据表,按下右键,可以看到能够执行的操作:点击新增数据表,弹出建表导航输入代码和显示名称,点击确定从左侧导航栏可以看到数据表已经建立,双击建立的数据表就可以对表进行操作看到这里的create_date 列不能删除,增加三列,一列为id(员工id),一列为dept_id(部门ID),一列为员工姓名。列的数据类型里可以选择数据域,选择后软件自动填入数据类型和长度,也可以选择数据类型,自己填入长度。填入列信息后点击保存。点击上面的数据代码,可以看到为各种数据库生成的代码:5 数据导入功能 
The tool also has some reverse-engineering capability: it can extract table definitions from an existing database. Click the Import menu at the top left, choose "Import from database", and follow the wizard to import the definitions of the database's existing tables. Here we use MySQL's sakila sample database as a demonstration. After the import, all of the sample database's tables have come in. Looking at the relationship diagram, however, the foreign-key relationships that exist between some sakila tables were not imported and have to be drawn by hand. Compare this with the ER diagram from the open-source database tool DBeaver: there the relationships between tables are shown directly and the primary keys are highlighted in bold. Hopefully PDManer will strengthen and polish this feature in future releases and make it more convenient and practical.

WPS spreadsheet tips

WPS's spreadsheet module is powerful and operates much like Microsoft Excel, so the tips here apply equally in Excel.

1 The sample sheet

The sample sheet here is simple, but it is enough to illustrate the tips we will use.

2 Merging the name and owner columns into a new column

A simple and common need is to merge two or more columns of a sheet into one, joined by a suitable separator — here, merging name and owner into one column separated by a space. This is trivial with the CONCAT function, and the separator is whatever you choose:

1) Insert a column to the right of owner.
2) In the first data row of the new column, enter the formula =CONCAT(A2," ",B2).

Press Enter to see the cell's computed value. Select the cell containing the formula and drag it down to copy the formula. Every row of the column is now the concatenation of name and owner separated by a space. To use a different separator, just change the character between the double quotes, e.g. change it to ".", copy the formula down again, and the result follows.

CONCAT accepts multiple arguments, so it can build more complex strings. Add one more column to the right of the new column and fill it with a more complex string using the formula =CONCAT(A2,",",B2,"->",C2). After copying it down, you can see how useful CONCAT is: from name, owner and the column we just added, we produced a more complex column using different separators.

3 Building a new column from even or odd rows

Suppose owner is not a column but a row beneath each name row. To pull the owner data out into a separate column inserted to the right of name, how should we proceed? With WPS formulas this takes only a few clicks and one drag — no formula typing required:

1) Insert a column to the right of owner and type the column header, here "New owner column", in its first row. In the second row of the new column type = and then click the cell in the third row of the first column; a formula is generated. Select the second and third rows of the new column, move the mouse to the bottom-right corner of the selection until a small + appears, and drag it down to copy the formulas. The owner data from the rows is now copied into the corresponding rows of the new column, and the new column matches our expectation.

4 Deleting rows that meet certain conditions

Continuing the example: we have generated the new owner column, and the data in the original owner rows is now redundant and best deleted. The first step is to make the new column no longer depend on the original rows: copy the column of formulas and use Paste Special, pasting as values. After pasting, clicking a cell shows a value rather than a formula.

To delete the original owner rows, first decide the deletion condition; here I use the new owner column and delete the rows where its value is blank. Hover above the column until the downward arrow appears, press the left mouse button to select the whole column, then press Ctrl+G to open the Go To dialog. Click "Blanks" and then "Go To": all rows with a blank owner value are now selected. Move the mouse over any selected row, right-click and delete the entire rows. The result matches expectations.

These tips show that WPS's copy features, especially formula copying, are very powerful. With a solid understanding of formulas and formula copying you can simplify many tedious everyday operations and work more efficiently. In the example above, for instance, we can copy formulas in groups and pick out values at any fixed row interval — as long as the interval follows a pattern, a simple formula-copy gets it done without repetitive manual work.
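The CONCAT steps above amount to joining each row's values with a chosen separator and filling the formula down the column. For readers who prefer to script it, here is a rough Python equivalent of the two formulas (the sample rows are invented for illustration):

```python
rows = [("orcl11g", "oracle"), ("mysql80", "mysql")]  # (name, owner) pairs

# =CONCAT(A2," ",B2) copied down the new column
merged = [" ".join(r) for r in rows]

# =CONCAT(A2,",",B2,"->",C2): reuse the merged column as the third argument
complex_col = [f"{name},{owner}->{m}" for (name, owner), m in zip(rows, merged)]

print(merged)       # ['orcl11g oracle', 'mysql80 mysql']
print(complex_col)  # ['orcl11g,oracle->orcl11g oracle', 'mysql80,mysql->mysql80 mysql']
```

As in the sheet, changing the separator means changing only the literal between the values.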

Uploading and downloading files with OSS

Alibaba Cloud OSS is Alibaba's massive cloud storage service. It is secure and reliable, with durability far higher than local storage, and in day-to-day use its most common role is probably file storage. OSS offers several access methods that make uploading and downloading files simple and convenient.

1 Purchasing OSS

OSS gives new users a 3-month 100 GB free trial, and the 40 GB half-year package is also a good deal. On the purchase page, choose a suitable package; new users will see the free 100 GB 3-month plan (any outstanding balance on the account must be cleared before purchasing it). Pay attention to the region: if you already have ECS or RDS instances, choose the same region as those instances, so that uploads and downloads to OSS can go over the internal network. Products in different regions must communicate over the public network, which may incur unnecessary cost.

2 Opening the OSS console, creating a bucket, configuring access policy

Log in to your Alibaba Cloud account and open the console to see all of the Alibaba Cloud products you have purchased. Click OSS to jump to the OSS console. The bucket list in the left navigation shows the buckets already created and lets you create new ones; clicking a bucket name shows its details, where you can create directories, set permissions, and so on. Creating a directory is simple — just click "New directory". Permission management is a little more involved, with several settings and options you can configure to suit your needs. The first thing to set is the bucket ACL, which has three options: "private" requires authentication for all access to the bucket; "public read" allows anonymous users to read the bucket; "public read/write" allows anonymous reads and writes. Under private mode, both uploads and downloads are considerably more involved than in the other two modes; this article mainly covers operating in that mode.

3 Uploading and downloading files from the OSS console

Console upload and download are very simple. Click "File management" in the left navigation of the bucket page to see the bucket's folders and files. Enter the target directory and click "Upload file"; the current target directory is shown, and you can drag files and directories straight into the upload area, or click "Scan files" / "Scan folders" to pick files or folders from the local machine. Downloading is equally simple: click "More" to the right of a file and then "Download", or select several files and use the batch operations at the top for a batch download.

4 Uploading and downloading with ossutil

ossutil is Alibaba Cloud's command-line tool. It supports Linux, Windows, macOS and other operating systems and can perform all OSS operations, including upload and download.

4.1 Downloading and installing ossutil

The download page is https://help.aliyun.com/document_detail/120075.html. Pick the package for your operating system and click the corresponding link to download it. The Linux 64-bit package is a single binary: after downloading, upload it to the Linux server, move it to an executable directory such as /usr/local/bin, and add execute permission. Then verify the binary's permissions and location:

[root@iZ2ze0t8khaprrpfvmevjiZ bin]# ls -l /usr/local/bin/ossutil64
-rwxr-xr-x 1 root root 10459836 Jun 29 17:11 /usr/local/bin/ossutil64

4.2 Creating a configuration file

Because my bucket ACL is set to private, every read and write of the bucket must be authenticated, and typing the connection and credential information for every operation is tedious. To avoid the repetition, create a configuration file and use it on every bucket operation; that spares you the pain of repeatedly entering the endpoint, AccessKey ID and AccessKey Secret, none of which are easy to remember. Gather that information first. The endpoint is shown on the bucket overview page; since my ECS and OSS are in the same region, the internal-network address will do. For the access key, click the avatar at the top right of the page, then "Manage AccessKey" to enter the key management page. If no access key is shown there, create one; if one exists, click "View key" on the right, request and enter the SMS verification code, and the AccessKey ID and secret are displayed on screen. With the necessary information in hand, create the configuration file. You can let ossutil create it interactively, edit the file directly, or pass options on the ossutil command line:

[root@iZ2ze0t8khaprrpfvmevjiZ ~]# ossutil64 config
The command creates a configuration file and stores credentials.
Please enter the config file name,the file name can include path(default /root/.ossutilconfig, carriage return will use the default file. If you specified this option to other file, you should specify --config-file option to the file when you use other commands):my_oss
For the following settings, carriage return means skip the configuration. Please try "help config" to see the meaning of the settings
Please enter language(CH/EN, default is:EN, the configuration will go into effect after the command successfully executed):CH
Please enter endpoint:threemonth.oss-cn-beijing-internal.aliyuncs.com
Please enter accessKeyID:LTAI5t725qPPeJhpsCoM3epR
Please enter accessKeySecret:****************************
Please enter stsToken:

At the prompts, enter the config file name, the language (CH or EN), the AccessKey ID and the AccessKey secret; for stsToken, just press Enter.

4.3 Managing buckets and uploading/downloading files with ossutil

List the buckets:

[root@iZ2ze0t8khaprrpfvmevjiZ ~]# ossutil64 ls -s --config-file my_oss
Error: oss: service returned error: StatusCode=403, ErrorCode=SignatureDoesNotMatch, ErrorMessage="The request signature we calculated does not match the signature you provided. Check your key and signing method.", RequestId=62BCF78D4CABF13632A1FBEE

The error says the signature does not match. The Alibaba documentation shows that the endpoint must not contain the bucket name — the endpoint screenshot shows the same — but when creating the configuration file above I had actually entered the bucket address. Edit the config file with vi, remove the bucket from the endpoint, and run the command again:

[root@iZ2ze0t8khaprrpfvmevjiZ ~]# ossutil64 ls --config-file my_oss
CreationTime                                 Region    StorageClass    BucketName
2022-06-29 17:29:59 +0800 CST        oss-cn-beijing        Standard    oss://threemonth
Bucket Number is: 1
0.173934(s) elapsed

The creation time, region and bucket name are shown. Upload a file:

[root@iZ2ze0t8khaprrpfvmevjiZ ~]# ossutil64 cp gdb.log oss://threemonth/upload --config-file my_oss
Succeed: Total num: 1, size: 1,925. OK num: 1(upload 1 files).
average speed 25000(byte/s)
0.080086(s) elapsed

The file did not land in the upload directory; instead it was uploaded to the bucket root under the name upload. To upload into a specific directory, the directory name must end with '/':

[root@iZ2ze0t8khaprrpfvmevjiZ ~]# ossutil64 cp gdb.log oss://threemonth/upload/ --config-file my_oss
Succeed: Total num: 1, size: 1,925. OK num: 1(upload 1 files).
average speed 25000(byte/s)

The file is now under the upload directory. Looking at the file's details shows its URL; try downloading a file from ECS with wget:

[root@iZ2ze0t8khaprrpfvmevjiZ ~]# wget https://threemonth.oss-cn-beijing.aliyuncs.com/upload/%E9%82%93%E4%B8%BD%E5%90%9B%20-%20%E4%BD%86%E6%84%BF%E4%BA%BA%E9%95%BF%E4%B9%85.mp3?Expires=1656553431&OSSAccessKeyId=TMP.3KewYToct7R9i9kETqz4LjLRb59FNXi82t6DgBikZU7ge7hsNX8MxZSZFqPwBAgQy7XLebr3Knt1qCEqWKrmM5anVMY64P&Signature=XESKN%2FwBVsmIp1S05YqqnLUs3w8%3D
[1] 320163
[2] 320164
[root@iZ2ze0t8khaprrpfvmevjiZ ~]#
Redirecting output to ‘wget-log’.
[1]-  Exit 8                  wget https://threemonth.oss-cn-beijing.aliyuncs.com/upload/%E9%82%93%E4%B8%BD%E5%90%9B%20-%20%E4%BD%86%E6%84%BF%E4%BA%BA%E9%95%BF%E4%B9%85.mp3?Expires=1656553431
[2]+  Done                    OSSAccessKeyId=TMP.3KewYToct7R9i9kETqz4LjLRb59FNXi82t6DgBikZU7ge7hsNX8MxZSZFqPwBAgQy7XLebr3Knt1qCEqWKrmM5anVMY64P

The download failed, and the output was redirected to wget-log. Look at that log:

[root@iZ2ze0t8khaprrpfvmevjiZ ~]# cat wget-log
--2022-06-30 09:39:17--  https://threemonth.oss-cn-beijing.aliyuncs.com/upload/%E9%82%93%E4%B8%BD%E5%90%9B%20-%20%E4%BD%86%E6%84%BF%E4%BA%BA%E9%95%BF%E4%B9%85.mp3?Expires=1656553431
Resolving threemonth.oss-cn-beijing.aliyuncs.com (threemonth.oss-cn-beijing.aliyuncs.com)... 59.110.190.48
Connecting to threemonth.oss-cn-beijing.aliyuncs.com (threemonth.oss-cn-beijing.aliyuncs.com)|59.110.190.48|:443... connected.
HTTP request sent, awaiting response... 403 Forbidden
2022-06-30 09:39:17 ERROR 403: Forbidden.

It is a 403 error: access denied. Download with ossutil instead, renaming the file along the way:

[root@iZ2ze0t8khaprrpfvmevjiZ ~]# ossutil64 cp oss://threemonth/upload/gdb.log ./gdb.log_download  --config-file=my_oss
Succeed: Total num: 1, size: 1,925. OK num: 1(download 1 objects).
average speed 13000(byte/s)
0.138836(s) elapsed

List the files:

[root@iZ2ze0t8khaprrpfvmevjiZ ~]# ls
gdb.init                                    oracle-database-preinstall-21c-1.0-1.el8.x86_64.rp
gdb.log                                     percona-release-latest.noarch.rpm
gdb.log_download                            PolarDB-for-PostgreSQL

The file downloaded successfully.
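The upload behavior above — `cp` to `oss://threemonth/upload` creating an object named upload in the bucket root, while `oss://threemonth/upload/` puts the file under the directory — follows from the target being treated as the full object key unless it ends with '/'. A small sketch of that resolution rule in Python (`resolve_key` is a hypothetical helper written for illustration, not part of ossutil):

```python
def resolve_key(target: str, filename: str) -> str:
    """Mimic the destination handling observed above: a target ending in '/'
    is a directory prefix, anything else is taken as the object key itself."""
    # strip the oss://bucket/ part, keeping only the key portion
    key = target.split("/", 3)[3] if target.count("/") >= 3 else ""
    if key == "" or key.endswith("/"):
        return key + filename      # upload into the "directory"
    return key                     # the target IS the object name

print(resolve_key("oss://threemonth/upload", "gdb.log"))   # → upload
print(resolve_key("oss://threemonth/upload/", "gdb.log"))  # → upload/gdb.log
```

The first call reproduces the surprise from the transcript: the file ends up as an object literally named upload.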

Experiment log: killing a pod in a PolarDB-X cluster and automatic recovery

1 Creating the PolarDB-X cluster

galaxykube$ kubectl get polardbxCluster polardb-x -o wide -w
NAME    PROTOCOL GMS   CN    DN    CDC   PHASE   DISK AGE
polardb-x   8.0  1/1   0/2   1/1   0/1   Creating   3m58s
polardb-x   8.0  1/1   0/2   1/1   1/1   Creating   5m6s
polardb-x   8.0  1/1   1/2   1/1   1/1   Creating   5m26s
polardb-x   8.0  1/1   2/2   1/1   1/1   Running  7.2 GiB       8.0.3-PXC-5.4.13-16534775/8.0.18   5m47s
polardb-x   8.0  1/1   2/2   1/1   1/1   Running 7.2 GiB        8.0.3-PXC-5.4.13-16534775/8.0.18   6m47s

The cluster has two CN nodes, one GMS node and one CDC node. The cluster's login password can be obtained with the following command:

galaxykube@i$ kubectl get secret polardb-x -o jsonpath="{.data['polardbx_root']}" | base64 -d - | xargs echo "Password: "
Password:  8zpmncf5

As you can see, the login password is stored in the Kubernetes secret resource polardb-x. While creating the cluster, the PolarDB-X operator also created a polardb-x service; its details are below (the -o wide option of kubectl get shows a resource's details):

galaxykube$ kubectl get svc polardb-x -o wide
NAME        TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE   SELECTOR
polardb-x   ClusterIP   10.106.6.253   <none>     3306/TCP,8081/TCP   30m   polardbx/cn-type=rw,polardbx/name=polardb-x,polardbx/rand=4k6z,polardbx/role=c

This service must be forwarded to local port 3306 before a local client can reach the PolarDB-X cluster. The forwarding command must not be exited; keep the session that runs it open:

galaxykube$ kubectl port-forward svc/polardb-x 3306

2 Connecting to the PolarDB-X cluster

Open a new session and connect to the server hosting the PolarDB-X cluster; operating as root is fine. The MySQL client must be installed on the server beforehand; with it you can log in to the cluster. The user name is polardbx_root, the password is the one obtained in the previous section, the address is the local loopback address, and the port is the forwarded port 3306:

root# mysql -h127.0.0.1 -P3306 -upolardbx_root -p8zpmncf5
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
........
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

3 Simulating a workload with sysbench

Create the database sysbench needs, named sysbench_test:

mysql> create database sysbench_test;

Exit the session and log in as the galaxykube user; add - to the su command to pick up the galaxykube environment:

su - galaxykube

Download sysbench-prepare.yaml (the sysbench data-preparation pod) and sysbench-oltp.yaml (the sysbench workload pod file):

wget https://labfileapp.oss-cn-hangzhou.aliyuncs.com/learn-some-polardb-x-main-class-5/sysbench-prepare.yaml
wget https://labfileapp.oss-cn-hangzhou.aliyuncs.com/learn-some-polardb-x-main-class-5/sysbench-oltp.yaml

Prepare the sysbench data by creating the sysbench-prepare job:

galaxykube$ kubectl apply -f sysbench-prepare.yaml

Check the job status:

galaxykube@iZbp12b91re208jg1y8likZ ~]$ kubectl get jobs
NAME                         COMPLETIONS   DURATION   AGE
sysbench-prepare-data-test   0/1           34s        34s

When COMPLETIONS reaches 1/1, the sysbench data is ready and the workload pod can be created and started:

galaxykube$ kubectl apply -f sysbench-oltp.yaml

Find the sysbench-oltp pod name:

galaxykube$ kubectl get pods
NAME                             READY   STATUS      RESTARTS      AGE
polardb-x-rz2l-cdc-default-579b4b7549-qdrk8   2/2     Running     2 (10m ago)   9d
........
polardb-x-rz2l-gms-log-0                      3/3     Running     3 (10m ago)   9d
sysbench-oltp-test-d5hhs                      1/1     Running     0             27s
sysbench-prepare-data-test-btt7g              0/1     Completed   0             2m8s

The workload pod is named sysbench-oltp-test-d5hhs. Monitor its log output continuously:

galaxykube$ kubectl logs -f sysbench-oltp-test-d5hhs
WARNING: --num-threads is deprecated, use --threads instead
sysbench 1.0.17 (using bundled LuaJIT 2.1.0-beta2)
Running the test with following options:
Number of threads: 8
Report intermediate results every 5 second(s)
Initializing random number generator from current time
Initializing worker threads...
Threads started!
[ 5s ] thds: 8 tps: 40.45 qps: 745.28 (r/w/o: 582.90/162.38/0.00) lat (ms,95%): 287.38 err/s: 0.00 reconn/s: 0.00
[ 10s ] thds: 8 tps: 49.35 qps: 887.70 (r/w/o: 690.09/197.60/0.00) lat (ms,95%): 314.45 err/s: 0.00 reconn/s: 0.00
[ 15s ] thds: 8 tps: 62.68 qps: 1135.51 (r/w/o: 882.38/253.13/0.00) lat (ms,95%): 186.54 err/s: 0.00 reconn/s: 0.00

The PolarDB-X cluster is currently running at 62.68 tps and 1135.51 qps.

4 Deleting a pod and observing the cluster state and the workload

[galaxykube@iZbp16rl52hr6rd8xo8rwyZ ~]$ kubectl get pods
NAME                                   READY   STATUS      RESTARTS      AGE
polardb-x-t6jd-cdc-default-5c9c46bcbd-svhvb   2/2     Running     0             14m
polardb-x-t6jd-cn-default-b84868666-w6sbm     3/3     Running     0             14m
polardb-x-t6jd-cn-default-b84868666-wzbdg     3/3     Running     1 (13m ago)   14m
.....

The cluster currently has two CN pods (the highlighted strings are their names). Delete the second pod:

galaxykube$ kubectl delete pod polardb-x-t6jd-cn-default-b84868666-wzbdg
pod "polardb-x-t6jd-cn-default-b84868666-wzbdg" deleted

Check the pod status in the cluster:

galaxykube$ kubectl get pods
NAME                                          READY   STATUS      RESTARTS   AGE
polardb-x-t6jd-cdc-default-5c9c46bcbd-svhvb   2/2     Running     0          20m
polardb-x-t6jd-cn-default-b84868666-r6m7j     2/3     Running     0          13s
polardb-x-t6jd-cn-default-b84868666-w6sbm     3/3     Running     0          20m

The cluster has recreated a pod, with a name different from the deleted CN pod, and it is starting up. Once startup completes, the pod's READY shows 3/3:

[galaxykube@iZbp16rl52hr6rd8xo8rwyZ ~]$ kubectl get pods
NAME                                          READY   STATUS      RESTARTS   AGE
polardb-x-t6jd-cdc-default-5c9c46bcbd-svhvb   2/2     Running     0          21m
polardb-x-t6jd-cn-default-b84868666-r6m7j     3/3     Running     0          49s
polardb-x-t6jd-cn-default-b84868666-w6sbm     3/3     Running     0          21m
....

Now check the workload pod's log to watch how the load changed. The workload dipped during the window in which the CN pod was deleted and rebuilt, and returned to its previous level once the rebuild completed. Throughout this period errs stayed at 0, and one interval showed reconnections, meaning the workload running against the deleted node (pod) reconnected to a healthy node (pod). Evidently, deleting and rebuilding a CN node in a PolarDB-X cluster does not disrupt the running workload and can be performed online.
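The checks above amount to repeatedly scanning `kubectl get pods` output for pods whose READY column (e.g. 2/3) shows fewer ready containers than declared. A minimal sketch of that check in Python — the `not_ready` helper and the embedded sample output are illustrative, not part of any tool:

```python
def not_ready(kubectl_output: str) -> list:
    """Return names of pods whose READY column (e.g. '2/3') shows fewer
    ready containers than the declared total."""
    pods = []
    for line in kubectl_output.strip().splitlines()[1:]:  # skip the header row
        fields = line.split()
        name, ready = fields[0], fields[1]
        done, total = map(int, ready.split("/"))
        if done < total:
            pods.append(name)
    return pods

sample = """\
NAME                                          READY   STATUS    RESTARTS   AGE
polardb-x-t6jd-cn-default-b84868666-r6m7j     2/3     Running   0          13s
polardb-x-t6jd-cn-default-b84868666-w6sbm     3/3     Running   0          20m"""

print(not_ready(sample))  # the recreated CN pod is still starting
```

In practice you would feed this the live `kubectl get pods` output and loop until the list comes back empty.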

Experiment log: connecting PolarDB-X with big-data systems

1 Install docker and PolarDB-X (omitted)

2 Install the canal server

Download the install script:

[root@iZbp13eg5pfeabf4cmf92eZ ~]# wget https://raw.githubusercontent.com/alibaba/canal/master/docker/run.sh
--2022-06-16 09:05:59--  https://raw.githubusercontent.com/alibaba/canal/master/docker/run.sh
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.110.133, 185.199.109.133, 185.199.108.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.110.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2551 (2.5K) [text/plain]
Saving to: ‘run.sh’
run.sh                       100%[==============================================>]   2.49K  --.-KB/s    in 0s
2022-06-16 09:05:59 (50.7 MB/s) - ‘run.sh’ saved [2551/2551]

Run the install script. The instance's master address is the source database's address and port; here it is the elastic IP of the ECS that was allocated. The script automatically pulls the corresponding docker image and starts the container:

[root@iZbp13eg5pfeabf4cmf92eZ ~]# sh run.sh -e canal.auto.scan=false \
> -e canal.destinations=test \
> -e canal.instance.master.address=114.55.7.239:8527 \
> -e canal.instance.dbUsername=polardbx_root \
> -e canal.instance.dbPassword=123456 \
> -e canal.instance.connectionCharset=UTF-8 \
> -e canal.instance.tsdb.enable=true \
> -e canal.instance.gtidon=false
docker run -d -it -h 0 -e canal.auto.scan=false -e canal.destinations=test -e canal.instance.master.address=114.55.7.239:8527 -e canal.instance.dbUsername=polardbx_root -e canal.instance.dbPassword=123456 -e canal.instance.connectionCharset=UTF-8 -e canal.instance.tsdb.enable=true -e canal.instance.gtidon=false --name=canal-server --net=host -m 4096m canal/canal-server
Unable to find image 'canal/canal-server:latest' locally
latest: Pulling from canal/canal-server
1c8f9aa56c90: Pull complete
c5e21c824d1c: Pull complete
4ba7edb60123: Pull complete
80d8e8fac1be: Pull complete
bce514860fc9: Pull complete
0b8a43c81049: Pull complete
a81188309a68: Pull complete
4f4fb700ef54: Pull complete
Digest: sha256:a5e93c0a1e452cdf17f4278ba0f5e7c902ee561385c264826ccd79797f2f872f
Status: Downloaded newer image for canal/canal-server:latest
c8c8c86b3653c6be0475f94a8179ff27df2cda2d221ac0ceaa471fc849bfdf7b

3 Install ClickHouse

ClickHouse runs as a docker container:

[root@iZbp13eg5pfeabf4cmf92eZ ~]# docker run -d --name some-clickhouse-server --ulimit nofile=262144:262144 -p 8123:8123 yandex/clickhouse-server
Unable to find image 'yandex/clickhouse-server:latest' locally
latest: Pulling from yandex/clickhouse-server
ea362f368469: Pull complete
38ba82a23e2b: Pull complete
9b17d04b6c62: Pull complete
5658714e4e8b: Pull complete
6bde977a0bf8: Pull complete
39053b27290b: Pull complete
762d3d237065: Pull complete
Digest: sha256:1cbf75aabe1e2cc9f62d1d9929c318a59ae552e2700e201db985b92a9bcabc6e
Status: Downloaded newer image for yandex/clickhouse-server:latest
4116fb78b90469b1c08675dc62b2b24337ecf0d47dde313d76db0a0e850f8c80

4 Create the databases and tables in PolarDB-X and ClickHouse

Create the table on the MySQL side:

[root@iZbp13eg5pfeabf4cmf92eZ ~]# mysql -h127.0.0.1 -P8527 -upolardbx_root -p123456
mysql: [Warning] Using a password on the command line interface can be insecure.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> CREATE DATABASE testdb;
Query OK, 1 row affected (0.55 sec)
mysql> use testdb;
Database changed
mysql> CREATE TABLE test(
    -> id INT(11) AUTO_INCREMENT PRIMARY KEY,
    -> name CHAR(20) not null );
Query OK, 0 rows affected (0.97 sec)
mysql> exit
Bye

Create the table in ClickHouse. First start a ClickHouse client container connected to the ClickHouse database:

[root@iZbp13eg5pfeabf4cmf92eZ ~]# docker run -it --rm --link some-clickhouse-server:clickhouse-server yandex/clickhouse-client --host clickhouse-server
Unable to find image 'yandex/clickhouse-client:latest' locally
latest: Pulling from yandex/clickhouse-client
2f94e549220a: Pull complete
a72d8599d7c2: Pull complete
e9232762ed9d: Pull complete
29c8f4b1e77e: Pull complete
Digest: sha256:9ae2ee421c9c9f00406a39a1174276aa23abb7fceac13b40578b18eeaa9bc4d1
Status: Downloaded newer image for yandex/clickhouse-client:latest
ClickHouse client version 22.1.3.7 (official build).
Connecting to clickhouse-server:9000 as user default.
Connected to ClickHouse server version 22.1.3 revision 54455.
4116fb78b904 :) CREATE DATABASE testdb;
CREATE DATABASE testdb
Query id: ad9ff6a8-59ce-4d9c-a537-70ad040fa3ca
Ok.
0 rows in set. Elapsed: 0.003 sec.
4116fb78b904 :) USE testdb;
USE testdb
Query id: 4fa7e43c-5718-4444-8deb-2768096505cd
Ok.
0 rows in set. Elapsed: 0.001 sec.
4116fb78b904 :) Create Table test(id INT(32),name CHAR(20)) Engine = MergeTree() Order By id;
CREATE TABLE test
(
    `id` INT(32),
    `name` CHAR(20)
)
ENGINE = MergeTree
ORDER BY id
Query id: d1e6269e-8db5-4cfa-bb0c-ae281aa24431
Ok.
0 rows in set. Elapsed: 0.010 sec.
4116fb78b904 :) exit
Bye.

5 Run the canal client to consume and deliver the incremental changes

Install java-1.8.0:

[root@iZbp13eg5pfeabf4cmf92eZ ~]# yum -y install java-1.8.0-openjdk*
Complete!

Download the canal PolarDB-X-to-ClickHouse client:

[root@iZbp13eg5pfeabf4cmf92eZ ~]# wget https://labfileapp.oss-cn-hangzhou.aliyuncs.com/polardb-x-to-clickhouse-canal-client.jar
--2022-06-16 09:21:27--  https://labfileapp.oss-cn-hangzhou.aliyuncs.com/polardb-x-to-clickhouse-canal-client.jar
Resolving labfileapp.oss-cn-hangzhou.aliyuncs.com (labfileapp.oss-cn-hangzhou.aliyuncs.com)... 118.31.219.220
Connecting to labfileapp.oss-cn-hangzhou.aliyuncs.com (labfileapp.oss-cn-hangzhou.aliyuncs.com)|118.31.219.220|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 43089777 (41M) [application/java-archive]
Saving to: ‘polardb-x-to-clickhouse-canal-client.jar’
polardb-x-to-clickhouse-cana 100%[==============================================>]  41.09M  9.15MB/s    in 4.5s
2022-06-16 09:21:32 (9.15 MB/s) - ‘polardb-x-to-clickhouse-canal-client.jar’ saved [43089777/43089777]

Run the client, and leave it running:

[root@iZbp13eg5pfeabf4cmf92eZ ~]# java -jar polardb-x-to-clickhouse-canal-client.jar
09:22:29.751 [main] DEBUG com.clickhouse.jdbc.ClickHouseDriver - ClickHouse Driver 0.0.0.0(JDBC: 0.0.0.0) registered
09:22:29.773 [main] DEBUG c.c.j.i.ClickHouseConnectionImpl - Creating a new connection to jdbc:clickhouse:http://localhost:8123/testdb
empty count : 1

6 Check the delivery

Connect to PolarDB-X and insert a few rows:

[root@iZbp13eg5pfeabf4cmf92eZ ~]# mysql -h127.0.0.1 -P8527 -upolardbx_root -p123456
mysql> use testdb;

Insert three rows:

mysql> INSERT INTO test(name) values("polardb-x"), ("is"), ("awsome");
Query OK, 3 rows affected (0.03 sec)

Connect to ClickHouse and look at the data:

[root@iZbp13eg5pfeabf4cmf92eZ ~]# docker run -it --rm --link some-clickhouse-server:clickhouse-server yandex/clickhouse-client --host clickhouse-server
4116fb78b904 :) use testdb;
4116fb78b904 :) SELECT * FROM test;
SELECT *
FROM test
Query id: dc1c99e8-8bc7-4eee-91b8-d45103fc898a
┌─id─┬─name──────┐
│  1 │ polardb-x │
└────┴───────────┘
┌─id─┬─name─┐
│  2 │ is   │
└────┴──────┘
┌─id─┬─name───┐
│  3 │ awsome │
└────┴────────┘

The data has been delivered to ClickHouse.

7 Checking the canal server's status

Check the status of the canal server and its instance through the logs, located under /home/admin/canal-server/logs/. The canal.log file in the canal directory shows the canal-server status:

[root@0 canal]# cat canal.log
2022-06-16 09:08:48.728 [main] INFO  com.alibaba.otter.canal.deployer.CanalLauncher - ## set default uncaught exception handler
2022-06-16 09:08:48.746 [main] INFO  com.alibaba.otter.canal.deployer.CanalLauncher - ## load canal configurations
2022-06-16 09:08:48.758 [main] INFO  com.alibaba.otter.canal.deployer.CanalStarter - ## start the canal server.
2022-06-16 09:08:48.812 [main] INFO  com.alibaba.otter.canal.deployer.CanalController - ## start the canal server[172.17.0.1(172.17.0.1):11111]
2022-06-16 09:08:50.137 [main] INFO  com.alibaba.otter.canal.deployer.CanalStarter - ## the canal server is running now
......

The test.log file in the test directory shows the instance's status (test is the instance name):

[root@0 test]# cat test.log
2022-06-16 09:08:50.064 [main] INFO  c.a.otter.canal.instance.spring.CanalInstanceWithSpring - start CannalInstance for 1-test
2022-06-16 09:08:50.083 [main] WARN  c.a.o.canal.parse.inbound.mysql.dbsync.LogEventConvert - --> init table filter : ^.*\..*$
2022-06-16 09:08:50.083 [main] WARN  c.a.o.canal.parse.inbound.mysql.dbsync.LogEventConvert - --> init table black filter : ^mysql\.slave_.*$
2022-06-16 09:08:50.089 [main] INFO  c.a.otter.canal.instance.core.AbstractCanalInstance - start successful....
2022-06-16 09:08:50.285 [destination = test , address = /114.55.7.239:8527 , EventParser] WARN  c.a.o.c.p.inbound.mysql.rds.RdsBinlogEventParserProxy - ---> begin to find start position, it will be long time for reset or first position
2022-06-16 09:08:50.285 [destination = test , address = /114.55.7.239:8527 , EventParser] WARN  c.a.o.c.p.inbound.mysql.rds.RdsBinlogEventParserProxy - prepare to find start position just show master status
2022-06-16 09:08:52.780 [destination = test , address = /114.55.7.239:8527 , EventParser] WARN  c.a.o.c.p.inbound.mysql.rds.RdsBinlogEventParserProxy - ---> find start position successfully, EntryPosition[included=false,journalName=binlog.000001,position=4,serverId=193317851,gtid=<null>,timestamp=1655341507000] cost : 2485ms , the next step is binlog dump
2022-06-16 09:22:29.597 [New I/O server worker #1-1] INFO  c.a.otter.canal.instance.core.AbstractCanalInstance - subscribe filter change to .*\..*
2022-06-16 09:22:29.597 [New I/O server worker #1-1] WARN  c.a.o.canal.parse.inbound.mysql.dbsync.LogEventConvert - --> init table filter : ^.*\..*$

8 Viewing the data delivered to ClickHouse from the canal client console

09:24:35.559 [main] DEBUG c.c.c.data.ClickHouseStreamResponse - 0 bytes skipped before closing input stream
================> binlog[binlog.000001:16765] , name[,] , eventType : QUERY
================> binlog[binlog.000001:16883] , name[testdb,test] , eventType : INSERT
id : 1    update=true
name : polardb-x    update=true
09:24:35.590 [main] DEBUG com.clickhouse.client.AbstractClient - Connecting to: ClickHouseNode(addr=http:localhost:8123, db=testdb)@1339874339
09:24:35.590 [main] DEBUG com.clickhouse.client.AbstractClient - Connection established: com.clickhouse.client.http.HttpUrlConnectionImpl@3b084709
09:24:35.590 [main] DEBUG c.c.client.http.ClickHouseHttpClient - Query: INSERT INTO test(id, name) values(1,'polardb-x')
09:24:35.611 [main] DEBUG c.c.c.data.ClickHouseStreamResponse - 0 bytes skipped before closing input stream
SQL done: INSERT INTO test(id, name) values(1,'polardb-x')
id : 2    update=true
name : is    update=true
09:24:35.612 [main] DEBUG com.clickhouse.client.AbstractClient - Connecting to: ClickHouseNode(addr=http:localhost:8123, db=testdb)@1339874339
09:24:35.612 [main] DEBUG com.clickhouse.client.AbstractClient - Connection established: com.clickhouse.client.http.HttpUrlConnectionImpl@59e5ddf
09:24:35.612 [main] DEBUG c.c.client.http.ClickHouseHttpClient - Query: INSERT INTO test(id, name) values(2,'is')
09:24:35.616 [main] DEBUG c.c.c.data.ClickHouseStreamResponse - 0 bytes skipped before closing input stream
SQL done: INSERT INTO test(id, name) values(2,'is')
id : 3    update=true
name : awsome    update=true
09:24:35.617 [main] DEBUG com.clickhouse.client.AbstractClient - Connecting to: ClickHouseNode(addr=http:localhost:8123, db=testdb)@1339874339
09:24:35.618 [main] DEBUG com.clickhouse.client.AbstractClient - Connection established: com.clickhouse.client.http.HttpUrlConnectionImpl@536aaa8d
09:24:35.618 [main] DEBUG c.c.client.http.ClickHouseHttpClient - Query: INSERT INTO test(id, name) values(3,'awsome')
09:24:35.625 [main] DEBUG c.c.c.data.ClickHouseStreamResponse - 0 bytes skipped before closing input stream

The canal client console shows the SQL statements delivered to ClickHouse; the highlighted lines above show that the three records were applied with three separate INSERT statements.
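The console output above shows the client turning each binlog row event into a literal INSERT statement. A minimal sketch of that translation step, assuming a row arrives as a column-name-to-value mapping (the function and sample data are hypothetical, not the actual client code):

```python
def row_to_insert(table: str, row: dict) -> str:
    """Build a ClickHouse INSERT from a canal row event, quoting strings
    the way the client log shows: INSERT INTO test(id, name) values(...)."""
    cols = ", ".join(row)
    vals = ",".join(
        str(v) if isinstance(v, int) else "'" + str(v).replace("'", "\\'") + "'"
        for v in row.values()
    )
    return f"INSERT INTO {table}({cols}) values({vals})"

print(row_to_insert("test", {"id": 1, "name": "polardb-x"}))
# → INSERT INTO test(id, name) values(1,'polardb-x')
```

This row-at-a-time replay is simple but slow for ClickHouse, which prefers large batched inserts; a production pipeline would buffer events and flush them in bulk.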

Practice log: deploying a PolarDB-X cluster with minikube

Resynchronizing tables in OGG

In Oracle GoldenGate you occasionally find that a table's data is out of sync. The first step is to find the cause — common ones are network problems and table structure changes. Once the cause is found and addressed, the last step of the fix is usually to resynchronize the data. In most cases it is enough to confirm that the archived logs needed for the sync still exist and then restart the extract and replicat processes. Sometimes, however, the archived logs needed by some of the tables have been deleted; then only those tables need a fresh data load, and the operation can be done online:

1. In the original replicat's parameter file, comment out the tables whose data is to be resynchronized.
2. Stop and restart the original replicat group, e.g. for a group named repliA:
   stop replicat repliA
   start replicat repliA
   The commented-out tables are no longer replicated.
3. Record the source database timestamp.
4. Resolve long-running transactions on the out-of-sync tables.
5. Export the out-of-sync tables with the export utility.
6. Copy the dump to the target system and import it.
7. Create a new replicat group, using the timestamp recorded in step 3 as the starting point and the current extract trail, e.g.:
   ADD REPLICAT <group>, EXTTRAIL <trail>, BEGIN <timestamp>
8. Create the new group's parameter file, including the conflict-handling option: EDIT PARAMS repliB
9. Start the new replicat group: start replicat repliB
10. Check the new group's lag: send replicat repliB, getlag. "At EOF" means there is no data left to process and the data is in sync.
11. Stop the new replicat group: stop replicat repliB
12. Edit the new group's parameter file, comment out the conflict-handling option, then start the group again.
13. Stop the EXTRACT.
14. Check the lag of both replicat groups until they show the data is in sync.
15. Edit the original group's parameter file and uncomment the now-synchronized tables.
16. Start the EXTRACT process; once it is up, start the original replicat.
17. Delete the new replicat group: delete replicat repliB
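Steps 7 through 11 above can be collected into a single GGSCI command sketch. This is an illustration only: repliB comes from the text, while the trail name ./dirdat/aa, the timestamp and HANDLECOLLISIONS as the conflict-handling option are assumed placeholder values you must replace with your own:

```text
-- GGSCI session on the target (illustrative values)
ADD REPLICAT repliB, EXTTRAIL ./dirdat/aa, BEGIN 2023-03-09 13:47:16
EDIT PARAMS repliB                 -- add HANDLECOLLISIONS for conflict handling
START REPLICAT repliB
SEND REPLICAT repliB, GETLAG       -- repeat until it reports "At EOF"
STOP REPLICAT repliB
```

The BEGIN timestamp must be the source timestamp recorded before the export, so that the new group replays exactly the changes the export missed.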

Oracle RMAN backup retention policy, archivelog deletion policy, and the DELETE command

While studying RMAN's backup retention policy and archivelog deletion policy today, I noticed that delete obsolete ignores the archivelog deletion policy and follows only the backup retention policy, while delete archivelog follows only the archivelog deletion policy and ignores the backup retention policy. Let's verify this with an experiment.

1 Database environment

1.1 Database version

SQL> select * from v$version;
BANNER
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
PL/SQL Release 11.2.0.4.0 - Production
CORE 11.2.0.4.0 Production
TNS for Linux: Version 11.2.0.4.0 - Production
NLSRTL Version 11.2.0.4.0 - Production

1.2 Database log mode

SQL> archive log list;
Database log mode Archive Mode
Automatic archival Enabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 1
Next log sequence to archive 3
Current log sequence 3

1.3 RMAN configuration

RMAN> show all;
using target database control file instead of recovery catalog
RMAN configuration parameters for database with db_unique_name ORCL11G are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP OFF; # default
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
CONFIGURE COMPRESSION ALGORITHM 'BASIC' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE ; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/u01/app/oracle/product/11.2.0/db_1/dbs/snapcf_orcl11g.f'; # default

2 Setting the backup retention and archivelog deletion policies

2.1 Keep the default retention policy: one valid backup (REDUNDANCY 1).

2.2 Set the archivelog deletion policy to "backed up 2 times to device type disk":

RMAN> configure archivelog deletion policy to backed up 2 times to device type disk;
old RMAN configuration parameters:
CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 1 TIMES TO DISK;
new RMAN configuration parameters:
CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 2 TIMES TO DISK;
new RMAN configuration parameters are successfully stored

3 Verifying the delete obsolete command

3.1 Back up the archived logs

RMAN> backup archivelog all;

3.2 Check for obsolete backups and files

RMAN> report obsolete;
RMAN retention policy will be applied to the command
RMAN retention policy is set to redundancy 1
Report of obsolete backups and copies
Type Key Completion Time Filename/Handle
Backup Set 1 03-AUG-19
Backup Piece 1 03-AUG-19 /home/oracle/backup/c_ORCL11G_20190803_1
Backup Set 2 03-AUG-19
Backup Piece 2 03-AUG-19 /home/oracle/backup/ctl_04u899pu_20190803_4;

The two backups above are earlier control file backups.

3.3 Switch logs to generate a few archives

RMAN> sql 'alter system switch logfile';
sql statement: alter system switch logfile

3.4 Take a full database backup

RMAN> backup database;

3.5 Check the obsolete files again

RMAN> report obsolete;
RMAN retention policy will be applied to the command
RMAN retention policy is set to redundancy 1
Report of obsolete backups and copies
Type Key Completion Time Filename/Handle
Archive Log 3 27-JUL-19 /u01/app/oracle/fast_recovery_area/ORCL11G/archivelog/2019_07_09/o1_mf_1_5_gl9cvbtm_.arc
Archive Log 1 27-JUL-19 /u01/app/oracle/fast_recovery_area/ORCL11G/archivelog/2019_07_27/o1_mf_1_8_gmqbfowj_.arc
Archive Log 2 27-JUL-19 /u01/app/oracle/fast_recovery_area/ORCL11G/archivelog/2019_07_27/o1_mf_1_9_gmqbjrmd_.arc
Archive Log 5 27-JUL-19 /u01/app/oracle/fast_recovery_area/ORCL11G/archivelog/2019_07_09/o1_mf_1_7_gl9df4p5_.arc
Archive Log 6 27-JUL-19 /u01/app/oracle/fast_recovery_area/ORCL11G/archivelog/2019_07_09/o1_mf_1_4_gl9clfkz_.arc
Archive Log 4 27-JUL-19 /u01/app/oracle/fast_recovery_area/ORCL11G/archivelog/2019_07_09/o1_mf_1_6_gl9cxkm7_.arc
Archive Log 7 27-JUL-19 /u01/app/oracle/fast_recovery_area/ORCL11G/archivelog/2019_07_27/o1_mf_1_10_gmqgf8cl_.arc
Archive Log 8 27-JUL-19 /u01/app/oracle/fast_recovery_area/ORCL11G/archivelog/2019_07_27/o1_mf_1_8_gmqgf8jr_.arc
Archive Log 9 27-JUL-19 /u01/app/oracle/fast_recovery_area/ORCL11G/archivelog/2019_07_27/o1_mf_1_9_gmqgf905_.arc
Backup Set 1 03-AUG-19
Backup Piece 1 03-AUG-19 /home/oracle/backup/c_ORCL11G_20190803_1
Backup Set 2 03-AUG-19
Backup Piece 2 03-AUG-19 /home/oracle/backup/ctl_04u899pu_20190803_4;
Archive Log 13 03-AUG-19 /u01/app/oracle/fast_recovery_area/ORCL11G/archivelog/2019_08_03/o1_mf_1_1_gnb29x1t_.arc
Backup Set 3 03-AUG-19
Backup Piece 3 03-AUG-19 /home/oracle/backup/c_ORCL11G_%l_20190803_6
Archive Log 14 04-AUG-19 /u01/app/oracle/fast_recovery_area/ORCL11G/archivelog/2019_08_04/o1_mf_1_1_gnfccmyr_.arc
Archive Log 15 10-AUG-19 /u01/app/oracle/fast_recovery_area/ORCL11G/archivelog/2019_08_10/o1_mf_1_2_gnvx3c6q_.arc
Archive Log 16 10-AUG-19 /u01/app/oracle/fast_recovery_area/ORCL11G/archivelog/2019_08_10/o1_mf_1_3_gnvxwrj5_.arc
Backup Set 4 10-AUG-19
Backup Piece 4 10-AUG-19 /u01/app/oracle/fast_recovery_area/ORCL11G/backupset/2019_08_10/o1_mf_annnn_TAG20190810T064544_gnvxws3s_.bkp
Backup Set 5 10-AUG-19
Backup Piece 5 10-AUG-19 /u01/app/oracle/fast_recovery_area/ORCL11G/backupset/2019_08_10/o1_mf_annnn_TAG20190810T064544_gnvxwt66_.bkp
Archive Log 17 10-AUG-19 /u01/app/oracle/fast_recovery_area/ORCL11G/archivelog/2019_08_10/o1_mf_1_4_gnvy7rm9_.arc
Backup Set 6 10-AUG-19
Backup Piece 6 10-AUG-19 /u01/app/oracle/fast_recovery_area/ORCL11G/backupset/2019_08_10/o1_mf_annnn_TAG20190810T064544_gnvxwv8t_.bkp
Archive Log 18 10-AUG-19 /u01/app/oracle/fast_recovery_area/ORCL11G/archivelog/2019_08_10/o1_mf_1_5_gnvy9x45_.arc
Archive Log 19 10-AUG-19 /u01/app/oracle/fast_recovery_area/ORCL11G/archivelog/2019_08_10/o1_mf_1_6_gnvyb315_.arc

The output shows that the archived logs produced before the full database backup, and their backups, are all marked as obsolete — even though they have not yet been backed up twice, as the archivelog deletion policy requires.

3.6 Delete the obsolete files

RMAN> delete noprompt obsolete;
RMAN retention policy will be applied to the command
RMAN retention policy is set to redundancy 1
using channel ORA_DISK_1
Deleting the following obsolete backups and copies:
Type Key Completion Time Filename/Handle
Archive Log 3 27-JUL-19 /u01/app/oracle/fast_recovery_area/ORCL11G/archivelog/2019_07_09/o1_mf_1_5_gl9cvbtm_.arc
Archive Log 1 27-JUL-19 /u01/app/oracle/fast_recovery_area/ORCL11G/archivelog/2019_07_27/o1_mf_1_8_gmqbfowj_.arc
Archive Log 2 27-JUL-19 /u01/app/oracle/fast_recovery_area/ORCL11G/archivelog/2019_07_27/o1_mf_1_9_gmqbjrmd_.arc
Archive Log 5 27-JUL-19 /u01/app/oracle/fast_recovery_area/ORCL11G/archivelog/2019_07_09/o1_mf_1_7_gl9df4p5_.arc
Archive Log 6 27-JUL-19 /u01/app/oracle/fast_recovery_area/ORCL11G/archivelog/2019_07_09/o1_mf_1_4_gl9clfkz_.arc
Archive Log 4 27-JUL-19 /u01/app/oracle/fast_recovery_area/ORCL11G/archivelog/2019_07_09/o1_mf_1_6_gl9cxkm7_.arc
Archive Log 7 27-JUL-19 /u01/app/oracle/fast_recovery_area/ORCL11G/archivelog/2019_07_27/o1_mf_1_10_gmqgf8cl_.arc
Archive Log 8 27-JUL-19 /u01/app/oracle/fast_recovery_area/ORCL11G/archivelog/2019_07_27/o1_mf_1_8_gmqgf8jr_.arc
Archive Log 9 27-JUL-19 /u01/app/oracle/fast_recovery_area/ORCL11G/archivelog/2019_07_27/o1_mf_1_9_gmqgf905_.arc
Backup Set 1 03-AUG-19
Backup Piece 1 03-AUG-19 /home/oracle/backup/c_ORCL11G_20190803_1
Backup Set 2 03-AUG-19
Backup Piece 2 03-AUG-19 /home/oracle/backup/ctl_04u899pu_20190803_4;
Archive Log 13 03-AUG-19 /u01/app/oracle/fast_recovery_area/ORCL11G/archivelog/2019_08_03/o1_mf_1_1_gnb29x1t_.arc
Backup Set 3 03-AUG-19
Backup Piece 3 03-AUG-19 /home/oracle/backup/c_ORCL11G_%l_20190803_6
Archive Log 14 04-AUG-19 /u01/app/oracle/fast_recovery_area/ORCL11G/archivelog/2019_08_04/o1_mf_1_1_gnfccmyr_.arc
Archive Log 15 10-AUG-19 /u01/app/oracle/fast_recovery_area/ORCL11G/archivelog/2019_08_10/o1_mf_1_2_gnvx3c6q_.arc
Archive Log 16 10-AUG-19 /u01/app/oracle/fast_recovery_area/ORCL11G/archivelog/2019_08_10/o1_mf_1_3_gnvxwrj5_.arc
Backup Set 4 10-AUG-19
Backup Piece 4 10-AUG-19 /u01/app/oracle/fast_recovery_area/ORCL11G/backupset/2019_08_10/o1_mf_annnn_TAG20190810T064544_gnvxws3s_.bkp
Backup Set 5 10-AUG-19
Backup Piece 5 10-AUG-19 /u01/app/oracle/fast_recovery_area/ORCL11G/backupset/2019_08_10/o1_mf_annnn_TAG20190810T064544_gnvxwt66_.bkp
Archive Log 17 10-AUG-19 /u01/app/oracle/fast_recovery_area/ORCL11G/archivelog/2019_08_10/o1_mf_1_4_gnvy7rm9_.arc
Backup Set 6 10-AUG-19
Backup Piece 6 10-AUG-19 /u01/app/oracle/fast_recovery_area/ORCL11G/backupset/2019_08_10/o1_mf_annnn_TAG20190810T064544_gnvxwv8t_.bkp
Archive Log 18 10-AUG-19 /u01/app/oracle/fast_recovery_area/ORCL11G/archivelog/2019_08_10/o1_mf_1_5_gnvy9x45_.arc
Archive Log 19 10-AUG-19 /u01/app/oracle/fast_recovery_area/ORCL11G/archivelog/2019_08_10/o1_mf_1_6_gnvyb315_.arc
deleted archived log
archived log file name=/u01/app/oracle/fast_recovery_area/ORCL11G/archivelog/2019_07_09/o1_mf_1_5_gl9cvbtm_.arc RECID=3 STAMP=1014719046
deleted archived log
archived log file name=/u01/app/oracle/fast_recovery_area/ORCL11G/archivelog/2019_07_27/o1_mf_1_8_gmqbfowj_.arc RECID=1 STAMP=1014719046
deleted archived log
archived log file name=/u01/app/oracle/fast_recovery_area/ORCL11G/archivelog/2019_07_27/o1_mf_1_9_gmqbjrmd_.arc RECID=2 STAMP=1014719046
deleted archived log
archived log file name=/u01/app/oracle/fast_recovery_area/ORCL11G/archivelog/2019_07_09/o1_mf_1_7_gl9df4p5_.arc RECID=5 STAMP=1014719047
deleted archived log
archived log file name=/u01/app/oracle/fast_recovery_area/ORCL11G/archivelog/2019_07_09/o1_mf_1_4_gl9clfkz_.arc RECID=6 STAMP=1014719047
deleted archived log
archived log file name=/u01/app/oracle/fast_recovery_area/ORCL11G/archivelog/2019_07_09/o1_mf_1_6_gl9cxkm7_.arc RECID=4 STAMP=1014719047
deleted archived log
archived log file name=/u01/app/oracle/fast_recovery_area/ORCL11G/archivelog/2019_07_27/o1_mf_1_10_gmqgf8cl_.arc RECID=7 STAMP=1014720040
deleted archived log
archived log file name=/u01/app/oracle/fast_recovery_area/ORCL11G/archivelog/2019_07_27/o1_mf_1_8_gmqgf8jr_.arc RECID=8 STAMP=1014720040
deleted archived log
archived log file name=/u01/app/oracle/fast_recovery_area/ORCL11G/archivelog/2019_07_27/o1_mf_1_9_gmqgf905_.arc RECID=9 STAMP=1014720041
deleted backup piece
backup piece handle=/home/oracle/backup/c_ORCL11G_20190803_1 RECID=1 STAMP=1015326317
deleted backup piece
backup piece handle=/home/oracle/backup/ctl_04u899pu_20190803_4; RECID=2 STAMP=1015326527
deleted archived log
archived log file
name=/u01/app/oracle/fast_recovery_area/ORCL11G/archivelog/2019_08_03/o1_mf_1_1_gnb29x1t_.arc RECID=13 STAMP=1015330237deleted backup piecebackup piece handle=/home/oracle/backup/c_ORCL11G_%l_20190803_6 RECID=3 STAMP=1015326746deleted archived logarchived log file name=/u01/app/oracle/fast_recovery_area/ORCL11G/archivelog/2019_08_04/o1_mf_1_1_gnfccmyr_.arc RECID=14 STAMP=1015437812deleted archived logarchived log file name=/u01/app/oracle/fast_recovery_area/ORCL11G/archivelog/2019_08_10/o1_mf_1_2_gnvx3c6q_.arc RECID=15 STAMP=1015914731deleted archived logarchived log file name=/u01/app/oracle/fast_recovery_area/ORCL11G/archivelog/2019_08_10/o1_mf_1_3_gnvxwrj5_.arc RECID=16 STAMP=1015915544deleted backup piecebackup piece handle=/u01/app/oracle/fast_recovery_area/ORCL11G/backupset/2019_08_10/o1_mf_annnn_TAG20190810T064544_gnvxws3s_.bkp RECID=4 STAMP=1015915545deleted backup piecebackup piece handle=/u01/app/oracle/fast_recovery_area/ORCL11G/backupset/2019_08_10/o1_mf_annnn_TAG20190810T064544_gnvxwt66_.bkp RECID=5 STAMP=1015915546deleted archived logarchived log file name=/u01/app/oracle/fast_recovery_area/ORCL11G/archivelog/2019_08_10/o1_mf_1_4_gnvy7rm9_.arc RECID=17 STAMP=1015915896deleted backup piecebackup piece handle=/u01/app/oracle/fast_recovery_area/ORCL11G/backupset/2019_08_10/o1_mf_annnn_TAG20190810T064544_gnvxwv8t_.bkp RECID=6 STAMP=1015915547deleted archived logarchived log file name=/u01/app/oracle/fast_recovery_area/ORCL11G/archivelog/2019_08_10/o1_mf_1_5_gnvy9x45_.arc RECID=18 STAMP=1015915965deleted archived logarchived log file name=/u01/app/oracle/fast_recovery_area/ORCL11G/archivelog/2019_08_10/o1_mf_1_6_gnvyb315_.arc RECID=19 STAMP=1015915971Deleted 22 objects数据库备份之前产生的备份集及归档日志都被删除了,检查以下现有备份文件及归档日志,没有归档日志,备份集只剩一下数据库全备的:RMAN> list backup;List of Backup SetsBS Key Type LV Size Device Type Elapsed Time Completion Time7 Full 1.03G DISK 00:00:27 10-AUG-19 BP Key: 7 Status: AVAILABLE Compressed: NO Tag: TAG20190810T065316 Piece Name: 
/u01/app/oracle/fast_recovery_area/ORCL11G/backupset/2019_08_10/o1_mf_nnndf_TAG20190810T065316_gnvybw9j_.bkpList of Datafiles in backup set 7 File LV Type Ckp SCN Ckp Time Name ---- -- ---- ---------- --------- ---- 1 Full 1028593 10-AUG-19 /u01/app/oracle/oradata/orcl11g/system01.dbf 2 Full 1028593 10-AUG-19 /u01/app/oracle/oradata/orcl11g/sysaux01.dbf 3 Full 1028593 10-AUG-19 /u01/app/oracle/oradata/orcl11g/undotbs01.dbf 4 Full 1028593 10-AUG-19 /u01/app/oracle/oradata/orcl11g/users01.dbfBS Key Type LV Size Device Type Elapsed Time Completion Time8 Full 9.67M DISK 00:00:01 10-AUG-19 BP Key: 8 Status: AVAILABLE Compressed: NO Tag: TAG20190810T065316 Piece Name: /u01/app/oracle/fast_recovery_area/ORCL11G/backupset/2019_08_10/o1_mf_ncsnf_TAG20190810T065316_gnvyd0dg_.bkpSPFILE Included: Modification time: 10-AUG-19 SPFILE db_unique_name: ORCL11G Control File Included: Ckp SCN: 1028605 Ckp time: 10-AUG-19RMAN> list archivelog all;specification does not match any archived log in the repository4 验证delete archivelog命令4.1 更改归档删除策略为None,归档删除策略为未经备份的归档日志也可以删除。RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO None;old RMAN configuration parameters:CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 2 TIMES TO DISK;new RMAN configuration parameters:CONFIGURE ARCHIVELOG DELETION POLICY TO NONE;new RMAN configuration parameters are successfully stored4.2 产生几个归档日志RMAN> sql 'alter system switch logfile';sql statement: alter system switch logfileRMAN> sql 'alter system switch logfile';sql statement: alter system switch logfileRMAN> sql 'alter system switch logfile';sql statement: alter system switch logfileRMAN>RMAN> list archivelog all;List of Archived Log Copies for database with db_unique_name ORCL11GKey Thrd Seq S Low Time20 1 7 A 10-AUG-19 Name: /u01/app/oracle/fast_recovery_area/ORCL11G/archivelog/2019_08_10/o1_mf_1_7_gnvzpxgh_.arc 21 1 8 A 10-AUG-19 Name: /u01/app/oracle/fast_recovery_area/ORCL11G/archivelog/2019_08_10/o1_mf_1_8_gnvzq4ch_.arc 22 1 9 A 10-AUG-19 Name: 
/u01/app/oracle/fast_recovery_area/ORCL11G/archivelog/2019_08_10/o1_mf_1_9_gnvzq9jc_.arc 4.3 系统中没有obsolete文件RMAN> report obsolete;RMAN retention policy will be applied to the commandRMAN retention policy is set to redundancy 1no obsolete backups found4.4 删除归档日志RMAN> delete noprompt archivelog all;released channel: ORA_DISK_1allocated channel: ORA_DISK_1channel ORA_DISK_1: SID=1 device type=DISKList of Archived Log Copies for database with db_unique_name ORCL11GKey Thrd Seq S Low Time20 1 7 A 10-AUG-19 Name: /u01/app/oracle/fast_recovery_area/ORCL11G/archivelog/2019_08_10/o1_mf_1_7_gnvzpxgh_.arc 21 1 8 A 10-AUG-19 Name: /u01/app/oracle/fast_recovery_area/ORCL11G/archivelog/2019_08_10/o1_mf_1_8_gnvzq4ch_.arc 22 1 9 A 10-AUG-19 Name: /u01/app/oracle/fast_recovery_area/ORCL11G/archivelog/2019_08_10/o1_mf_1_9_gnvzq9jc_.arc deleted archived logarchived log file name=/u01/app/oracle/fast_recovery_area/ORCL11G/archivelog/2019_08_10/o1_mf_1_7_gnvzpxgh_.arc RECID=20 STAMP=1015917405deleted archived logarchived log file name=/u01/app/oracle/fast_recovery_area/ORCL11G/archivelog/2019_08_10/o1_mf_1_8_gnvzq4ch_.arc RECID=21 STAMP=1015917412deleted archived logarchived log file name=/u01/app/oracle/fast_recovery_area/ORCL11G/archivelog/2019_08_10/o1_mf_1_9_gnvzq9jc_.arc RECID=22 STAMP=1015917417Deleted 3 objectsRMAN>删除了刚才产生的归档日志,这些归档日志是当前备份保留策略所需要的,也没有备份。可见,delete archivelog命令并不考虑数据库备份策略。5 结论从上面的实验上来看,delete archivelog命令只遵循归档日志删除策略,delete obsolete命令只遵循备份保留策略,因此,在使用时要注意二者的相互影响,即delete obsolete 命令可以破坏归档日志删除策略,delete archivelog 可以破坏备份保留策略,在使用脚本自动备份时,要仔细分析和验证,避免造成意外的结果,而在dataguard环境尤其要谨慎使用delete obsolete 命令,避免删除未传输的日志。
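For scripted backups, the conclusion above suggests letting each command enforce only its own policy. A minimal sketch of a nightly RMAN script along those lines (the "backed up 2 times to disk" requirement is an illustrative assumption matching the deletion policy shown earlier, not a setting prescribed by this article):

```
run {
  backup database plus archivelog;
  # enforce only the backup retention policy
  delete noprompt obsolete;
  # delete only archived logs already backed up twice to disk,
  # instead of the unconditional 'delete noprompt archivelog all'
  delete noprompt archivelog all backed up 2 times to device type disk;
}
```

With this split, neither command can silently remove files the other policy still needs.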

Using DBeaver to Connect to and Manage RDS

1 Configure the RDS account and the public-address whitelist

1) Set the whitelist. Go to the RDS instance management page, click Data Security, edit the whitelist group, and add the IP address of the machine running DBeaver. I was on a phone Wi-Fi hotspot and could not determine my public address, so I added 0.0.0.0/0 to allow connections from any address; after the connection succeeds you can look up your own IP and tighten the whitelist.

2) Set up a database account. Here the test account is used.

2 Configure the DBeaver connection to RDS

Choose New Database Connection and select MySQL as the database type. Copy the external address from the RDS console into the host field, fill in the user name, password, and other details, and click Test Connection. My first attempt failed with an error; the fix is to edit the driver properties and set allowPublicKeyRetrieval to true. Find that property, change it to true, and click Test Connection again. This time the test succeeds and the RDS status changes to connected.

3 Adjust the RDS whitelist

1) Right-click the RDS instance in the database navigator, open a SQL window, and run a query whose output shows your local IP address. Add that address to the RDS whitelist and remove 0.0.0.0/0.

4 Developing and managing RDS with DBeaver

DBeaver provides several features convenient for operations and development, such as viewing ER (entity-relationship) diagrams and database dashboards.
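The query behind the article's screenshot is not reproduced here; a substitute that returns the address the RDS server sees for the current session (the value to add to the whitelist) could look like this sketch using standard MySQL system views:

```
-- Show the client address of the current connection; the host column
-- has the form 'ip:port', so strip the port before whitelisting the IP
SELECT SUBSTRING_INDEX(host, ':', 1) AS client_ip
FROM information_schema.processlist
WHERE id = CONNECTION_ID();
```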

PolarDB-X Performance Tuning: Execution Plan Basics

1 Basic execution plan concepts

When an application uses PolarDB-X, the query SQL it issues (the logical SQL) is sent to a PolarDB-X compute node (CN). PolarDB-X splits it into a pushdown part and a non-pushdown part; the pushdown part is also called the physical SQL. The non-pushdown part executes on the CN, and the pushed-down SQL executes on the data nodes (DN).

During query optimization PolarDB-X pushes as much of the SQL down to the DNs as possible. Besides avoiding network traffic between CN and DN, this fully exploits concurrent execution across shards, using every DN's resources to speed up the query. Operators that can be pushed down to a DN include LogicalView, LogicalModifyView, PhyTableOperation, and IndexScan. For the parts of a statement that cannot be pushed down, the optimizer chooses the best way to execute them, for example picking suitable operators, a suitable parallelism strategy, and whether to use MPP execution.

Two basic concepts are needed to read PolarDB-X execution plans:
a) Logical SQL: the query SQL issued by the user;
b) Physical SQL: after query optimization, the SQL is generally split into pushdown and non-pushdown parts; the pushdown SQL sent to the DNs is the physical SQL. If the entire logical SQL is pushed down to the DNs, the physical SQL is equivalent to the logical SQL.

PolarDB-X's explain command offers quite a few options for viewing and analyzing an execution plan from different angles.

2 The example SQL statement

MySQL [test]> select a.dept_no, b.dept_name, sum(a.salary) from emp a, dept b
    -> where a.dept_no = b.dept_no and b.dept_no = 30
    -> group by a.dept_no, b.dept_name;
+---------+--------------+---------------+
| dept_no | dept_name    | sum(a.salary) |
+---------+--------------+---------------+
|      30 | 进出口部     |          1375 |
+---------+--------------+---------------+
1 row in set (0.09 sec)

The statement above is used throughout this article to demonstrate the different explain options.

3 Viewing the logical execution plan

MySQL [test]> explain select a.dept_no, b.dept_name, sum(a.salary) from emp a, dept b
    -> where a.dept_no = b.dept_no and b.dept_no = 30
    -> group by a.dept_no, b.dept_name;
LOGICAL EXECUTIONPLAN
HashAgg(group="dept_no,dept_name", sum(a.salary)="SUM(salary)")
  Project(dept_no="dept_no0", salary="salary", dept_no0="dept_no", dept_name="dept_name")
    BKAJoin(condition="dept_no = dept_no", type="inner")
      LogicalView(tables="dept[p11]", sql="SELECT `dept_no`, `dept_name` FROM `dept` AS `dept` WHERE (`dept_no` = ?)")
      Gather(concurrent=true)
        LogicalView(tables="emp[p1,p2]", shardCount=2, sql="SELECT `dept_no`, `salary` FROM `emp` AS `emp` WHERE ((`dept_no` = ?) AND (`dept_no` IN (...)))")
HitCache:true
Source:PLAN_CACHE
TemplateId: 466da528
9 rows in set (0.01 sec)

explain with no options shows the logical execution plan. This statement has two LogicalView operators that can be pushed down to the DN nodes; the CN first performs the BKAJoin and then the hash aggregation.

To see the optimizer's cost estimate for each step, use the cost option:

MySQL [test]> explain cost select a.dept_no, b.dept_name, sum(a.salary) from emp a, dept b
    -> where a.dept_no = b.dept_no and b.dept_no = 30
    -> group by a.dept_no, b.dept_name;
LOGICAL EXECUTIONPLAN
HashAgg(group="dept_no,dept_name", sum(a.salary)="SUM(salary)"): rowcount = 1.0, cumulative cost = value = 1.0015027E7, cpu = 27.0, memory = 199.0, io = 3.0, net = 2.0
  Project(dept_no="dept_no0", salary="salary", dept_no0="dept_no", dept_name="dept_name"): rowcount = 1.0, cumulative cost = value = 1.0015012E7, cpu = 12.0, memory = 69.0, io = 3.0, net = 2.0
    BKAJoin(condition="dept_no = dept_no", type="inner"): rowcount = 1.0, cumulative cost = value = 1.0015011E7, cpu = 11.0, memory = 69.0, io = 3.0, net = 2.0
      LogicalView(tables="dept[p11]", sql="SELECT `dept_no`, `dept_name` FROM `dept` AS `dept` WHERE (`dept_no` = ?)"): rowcount = 1.0, cumulative cost = value = 5005002.0, cpu = 2.0, memory = 0.0, io = 1.0, net = 1.0
      Gather(concurrent=true): rowcount = 1.0, cumulative cost = value = 5003.0, cpu = 3.0, memory = 0.0, io = 1.0, net = 0.0
        LogicalView(tables="emp[p1,p2]", shardCount=2, sql="SELECT `dept_no`, `salary` FROM `emp` AS `emp` WHERE ((`dept_no` = ?) AND (`dept_no` IN (...)))"): rowcount = 1.0, cumulative cost = value = 5002.0, cpu = 2.0, memory = 0.0, io = 1.0, net = 0.0
HitCache:true
Source:PLAN_CACHE
WorkloadType: TP
TemplateId: 466da528
10 rows in set (0.00 sec)

Here you can see each operator's estimated row count, cumulative cost, and the CPU, memory, and I/O it is expected to consume.

With the analyze option, besides the optimizer's estimates you also see each operator's actual resource consumption:

MySQL [test]> explain analyze select a.dept_no, b.dept_name, sum(a.salary) from emp a, dept b
    -> where a.dept_no = b.dept_no and b.dept_no = 30
    -> group by a.dept_no, b.dept_name;
LOGICAL EXECUTIONPLAN
HashAgg(group="dept_no,dept_name", sum(a.salary)="SUM(salary)"): rowcount = 1.0, cumulative cost = value = 1.0015027E7, cpu = 27.0, memory = 199.0, io = 3.0, net = 2.0, actual time = 0.000 + 0.000, actual rowcount = 1, actual memory = 0, instances = 1
  Project(dept_no="dept_no0", salary="salary", dept_no0="dept_no", dept_name="dept_name"): rowcount = 1.0, cumulative cost = value = 1.0015012E7, cpu = 12.0, memory = 69.0, io = 3.0, net = 2.0, actual time = 0.003 + 0.001, actual rowcount = 27, actual memory = 0, instances = 1
    BKAJoin(condition="dept_no = dept_no", type="inner"): rowcount = 1.0, cumulative cost = value = 1.0015011E7, cpu = 11.0, memory = 69.0, io = 3.0, net = 2.0, actual time = 0.003 + 0.001, actual rowcount = 27, actual memory = 261876, instances = 1
      LogicalView(tables="dept[p11]", sql="SELECT `dept_no`, `dept_name` FROM `dept` AS `dept` WHERE (`dept_no` = ?)"): rowcount = 1.0, cumulative cost = value = 5005002.0, cpu = 2.0, memory = 0.0, io = 1.0, net = 1.0, actual time = 0.000 + 0.001, actual rowcount = 1, actual memory = 0, instances = 0
      Gather(concurrent=true): rowcount = 1.0, cumulative cost = value = 5003.0, cpu = 3.0, memory = 0.0, io = 1.0, net = 0.0, actual time = 0.000 + 0.000, actual rowcount = 0, actual memory = 0, instances = 0
        LogicalView(tables="emp[p1,p2]", shardCount=2, sql="SELECT `dept_no`, `salary` FROM `emp` AS `emp` WHERE ((`dept_no` = ?) AND (`dept_no` IN (...)))"): rowcount = 1.0, cumulative cost = value = 5002.0, cpu = 2.0, memory = 0.0, io = 1.0, net = 0.0, actual time = 0.001 + 0.001, actual rowcount = 27, actual memory = 0, instances = 0
HitCache:true
Source:PLAN_CACHE
TemplateId: 466da528
9 rows in set (0.05 sec)

Here, besides the optimizer's estimated cost, you can see each operator's actual execution time, row count, and memory use.

The physical option shows the fragments the statement executes in and the dependencies between them:

MySQL [test]> explain physical select a.dept_no, b.dept_name, sum(a.salary) from emp a, dept b
    -> where a.dept_no = b.dept_no and b.dept_no = 30
    -> group by a.dept_no, b.dept_name;
PLAN
ExecutorMode: TP_LOCAL
Fragment 0 dependency: [] parallelism: 1
Project(dept_no="dept_no0", salary="salary", dept_no0="dept_no", dept_name="dept_name")
  BKAJoin(condition="dept_no = dept_no", type="inner")
    LogicalView(tables="dept[p11]", sql="SELECT `dept_no`, `dept_name` FROM `dept` AS `dept` WHERE (`dept_no` = ?)")
    Gather(concurrent=true)
      LogicalView(tables="emp[p1,p2]", shardCount=2, sql="SELECT `dept_no`, `salary` FROM `emp` AS `emp` WHERE ((`dept_no` = ?) AND (`dept_no` IN (...)))")
Fragment 1 dependency: [0] parallelism: 2
HashAgg(group="dept_no,dept_name", sum(a.salary)="SUM(salary)")
  RemoteSource(sourceFragmentIds=[0], type=RecordType(INTEGER dept_no, BIGINT salary, TINYINT(3) dept_no0, VARCHAR(20) dept_name))
10 rows in set (0.05 sec)

Execution is split into two fragments; the later hash-aggregation fragment depends on the projection/BKAJoin fragment before it.

4 Viewing the physical execution plan

To view the physical execution plan, use the execute option; what you see is the MySQL execution plan of the part of the statement that runs on the DN nodes.

MySQL [test]> explain execute select a.dept_no, b.dept_name, sum(a.salary) from emp a, dept b
    -> where a.dept_no = b.dept_no and b.dept_no = 30
    -> group by a.dept_no, b.dept_name\G
*************************** 1. row ***************************
           id:
  select_type: SIMPLE
        table: dept
   partitions: NULL
         type: const
possible_keys: PRIMARY
          key: PRIMARY
      key_len: 1
          ref: const
         rows:
     filtered: 100
        Extra: NULL
*************************** 2. row ***************************
           id:
  select_type: SIMPLE
        table: emp
   partitions: NULL
         type: ALL
possible_keys: NULL
          key: NULL
      key_len: NULL
          ref: NULL
         rows:
     filtered: 9.999999046325684
        Extra: Using where
2 rows in set (0.05 sec)

5 Index recommendations

The advisor option of explain can recommend suitable indexes and a table partitioning scheme for a SQL statement.

MySQL [test]> explain advisor select a.dept_no, b.dept_name, sum(a.salary) from emp a, dept b
    -> where a.dept_no = b.dept_no and b.dept_no = 30
    -> group by a.dept_no, b.dept_name\G
*************************** 1. row ***************************
IMPROVE_VALUE: 99.6%
  IMPROVE_CPU: 207.2%
  IMPROVE_MEM: 51.3%
   IMPROVE_IO: -33.3%
  IMPROVE_NET: 100.0%
 BEFORE_VALUE: 1.00100228E7
   BEFORE_CPU: 22.7
   BEFORE_MEM: 140.8
    BEFORE_IO: 2
   BEFORE_NET: 2
  AFTER_VALUE: 5015007.4
    AFTER_CPU: 7.3
    AFTER_MEM: 93
     AFTER_IO: 3
    AFTER_NET: 1
 ADVISE_INDEX: ALTER TABLE test.emp BROADCAST; ALTER TABLE test.dept BROADCAST;
     NEW_PLAN:
Gather(concurrent=true)
  LogicalView(tables="emp,dept", shardCount=0, sql="SELECT `emp`.`dept_no`, `dept`.`dept_name`, SUM(`emp`.`salary`) AS `sum(a.salary)` FROM `emp` AS `emp` INNER JOIN `dept` AS `dept` ON (((`dept`.`dept_no` = ?) AND (`emp`.`dept_no` = `dept`.`dept_no`)) AND (`emp`.`dept_no` = ?)) GROUP BY `emp`.`dept_no`, `dept`.`dept_name`")
         INFO: BROADCAST
1 row in set (0.18 sec)

The recommendation here is to turn both tables into broadcast tables.
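Applying the advisor's suggestion is a separate DDL step. The statements below are taken verbatim from the ADVISE_INDEX field of the output; a broadcast table keeps a full replica on every DN so joins against it can be pushed down, so run them only after confirming both tables are small enough for that trade-off:

```
ALTER TABLE test.emp BROADCAST;
ALTER TABLE test.dept BROADCAST;
```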

Reading MySQL execution plans: worked examples for many SQL statements
Using MySQL's official sample database sakila, this article demonstrates the meaning of each column in a MySQL execution plan and how to use it for performance diagnosis. It also covers setting up the environment, including installing the database and creating the sample schema, and gives concrete SQL examples for each column.
1 Setting up the demo environment
1.1 Installing MySQL
1.2 Initializing and starting MySQL
1.3 Importing the sakila sample database
1.4 The main example tables
2 MySQL execution plans
3 Key fields in a MySQL execution plan
3.1 id
3.2 select_type, possible_keys, key, table, ref
3.3 type
3.4 extra
4 Using materialization to optimize subqueries
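As a taste of the last section in the outline, the sketch below shows a sakila query whose IN subquery MySQL may choose to materialize; in that case EXPLAIN reports the subquery with select_type = MATERIALIZED (whether materialization is chosen depends on the server version and optimizer_switch settings):

```
EXPLAIN
SELECT title
FROM film
WHERE film_id IN (SELECT film_id
                  FROM film_actor
                  WHERE actor_id = 1);
```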