HDFS is almost full, so a new disk has to be added to the node. (Lesson learned: when bringing a new node online, fill all of its disk bays at once so you do not have to add disks afterwards.)
Attention:
When adding a disk, the new path has to be added not only to the DataNode configuration but also to the NodeManager configuration; the relevant properties are shown in the sketch below.
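For reference, the two settings that have to be extended with the new paths are dfs.datanode.data.dir (HDFS DataNode data directories) and yarn.nodemanager.local-dirs (YARN NodeManager local directories); in Cloudera Manager they are edited under the DataNode and NodeManager role configuration. A minimal sketch for checking the currently effective values on the node (the /etc/hadoop/conf path is the usual client config location and is an assumption; on a CM-managed cluster the running role configs live under the agent's process directory):

# Effective DataNode data directories, read from the client configuration
hdfs getconf -confKey dfs.datanode.data.dir
# There is no equivalent getconf for YARN, so inspect the client yarn-site.xml
grep -A 1 'yarn.nodemanager.local-dirs' /etc/hadoop/conf/yarn-site.xml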
Then modify the configuration in CDH/Cloudera Manager. (I had a special case: one machine had a broken hard disk port, so I had to go into each role instance and configure it individually.)
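The commands below assume the new disks are already formatted and mounted at /data4, /data5, and /data6. If they are not, here is a minimal sketch for one disk, assuming the new device shows up as /dev/sdd and ext4 is used (device name, filesystem, and mount options are assumptions to adapt):

# Find the new, unmounted device
lsblk
# Create a filesystem on it (this destroys anything already on the device)
mkfs.ext4 /dev/sdd
# Mount it at the new data directory
mkdir -p /data4
mount /dev/sdd /data4
# Make the mount survive reboots (using UUID=... from blkid is safer than the device name)
echo '/dev/sdd /data4 ext4 defaults,noatime 0 0' >> /etc/fstab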
mkdir /data4/dfs
mkdir /data4/yarn
rm -rf /data4/lost+found
mkdir /data4/dfs/dn
chown -R hdfs:hadoop /data4/dfs/dn
mkdir /data4/yarn/nm
chown -R yarn:hadoop /data4/yarn/nm
mkdir /data5/dfs
mkdir /data5/yarn
rm -rf /data5/lost+found
mkdir /data5/dfs/dn
chown -R hdfs:hadoop /data5/dfs/dn
mkdir /data5/yarn/nm
chown -R yarn:hadoop /data5/yarn/nm
mkdir /data6/dfs
mkdir /data6/yarn
rm -rf /data6/lost+found
mkdir /data6/dfs/dn
chown -R hdfs:hadoop /data6/dfs/dn
mkdir /data6/yarn/nm
chown -R yarn:hadoop /data6/yarn/nm
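The same per-disk steps can also be written as one loop; a sketch equivalent to the commands above:

for d in /data4 /data5 /data6; do
  rm -rf "$d/lost+found"            # remove the ext filesystem's lost+found so HDFS does not scan it
  mkdir -p "$d/dfs/dn" "$d/yarn/nm" # -p also creates the intermediate dfs/ and yarn/ directories
  chown -R hdfs:hadoop "$d/dfs/dn"
  chown -R yarn:hadoop "$d/yarn/nm"
done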
ll /data4/dfs/
ll /data4/yarn/
ll /data5/dfs/
ll /data5/yarn/
ll /data6/dfs/
ll /data6/yarn/
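After adding the new paths to dfs.datanode.data.dir and yarn.nodemanager.local-dirs in Cloudera Manager and restarting the DataNode and NodeManager roles, it is worth confirming that the new disks are actually picked up; a sketch of checks run on the node (using sudo to the hdfs user is an assumption about how the cluster is administered):

# Mount points and free space for the new disks
df -h /data4 /data5 /data6
# The configured capacity reported for this DataNode should have grown
sudo -u hdfs hdfs dfsadmin -report
# The NodeManager should still be listed as RUNNING
yarn node -list
# Block subdirectories will appear under the new dn directories as data lands
ls /data4/dfs/dn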
"Original" Add a new disk