File upload and access exceptions after replacing the FastDFS data directory


1. Image access exception: problem description

A new FastDFS file server environment had just been set up. Right after installation an image was uploaded and accessed normally (the port was not yet opened externally, so access authentication had not been verified), and the environment was then shelved until testing began.
Later, during testing, there were too many image files to re-upload one by one, so the DFS data directory from the production environment was copied over to replace the data directory of the new environment. Only the following files and directories were kept from the new environment's original installation (/data/dfs is the data directory):

/data/dfs/tracker Directory
/data/dfs/group1/data/fdfs_storaged.pid
/data/dfs/group1/data/storage_stat.dat
/data/dfs/group1/data/storage_trunk.dat
/data/dfs/group1/data/sync Directory
/data/dfs/group1/data/trunk Directory

After restarting the tracker, storage and Nginx services, the image could no longer be accessed: the browser showed a blank page, and curl returned nothing, simply hanging until interrupted with Ctrl+C:

[[email protected] logs]# curl http://10.0.0.10:8090/groupA/M00/00/00/cErM6luMkf-IbhOWAAhHLHLDXwwAAAABQKwYD8ACEdE376.jpg-m
# no output at all; the only way out is Ctrl+C
[[email protected] logs]#
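On a hang like this, curl's --max-time option is a handy way to keep the command from blocking forever; the 5-second limit below is an arbitrary choice for illustration, not part of the original session:

# give up after 5 seconds instead of hanging; -v also shows whether the TCP connection is established at all
curl -v --max-time 5 http://10.0.0.10:8090/groupA/M00/00/00/cErM6luMkf-IbhOWAAhHLHLDXwwAAAABQKwYD8ACEdE376.jpg-m
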
Cause analysis

1. Viewing ports and processes

The storage and tracker processes were both running:

[[email protected] ~]# ps -ef|grep storage.conf
root      1126     1  0 14:57 ?        00:00:00 /usr/bin/fdfs_storaged /etc/fdfs/storage.conf restart
root      5139  5071  0 15:13 pts/8    00:00:00 grep --color=auto storage.conf
[[email protected] ~]# ps -ef|grep tracker.conf
root      5149  5071  0 15:13 pts/8    00:00:00 grep --color=auto tracker.conf
root     30168     1  0 14:44 ?        00:00:00 /usr/bin/fdfs_trackerd /etc/fdfs/tracker.conf restart
[[email protected] ~]#

The tracker and storage ports were both listening, and the firewall had the corresponding ports open:

[[email protected] ~]# netstat -tlunp|grep 23000
tcp        0      0 0.0.0.0:23000           0.0.0.0:*               LISTEN      1126/fdfs_storaged
[[email protected] ~]# netstat -tlunp|grep 22122
tcp        0      0 0.0.0.0:22122           0.0.0.0:*               LISTEN      30168/fdfs_trackerd

Nginx's port was also up, but the process state was abnormal: only the master process existed, with no worker process:

[[email protected] sbin]# ps -ef|grep nginx
root       744 22962  0 10:37 pts/8    00:00:00 grep --color=auto nginx
root     29076     1  0 10:21 ?        00:00:00 nginx: master process ./nginx     # only a master process, no workers
[[email protected] sbin]#
2. View Logs

The Nginx error_log kept repeating the following errors:

ngx_http_fastdfs_process_init pid=29077
[2018-09-05 10:21:46] ERROR - file: shared_func.c, line: 960, open file /etc/fdfs/mod_fastdfs.conf fail, errno: 13, error info: Permission denied
[2018-09-05 10:21:46] ERROR - file: /usr/local/fastdfs-nginx-module/src/common.c, line: 155, load conf file "/etc/fdfs/mod_fastdfs.conf" fail, ret code: 13
2018/09/05 10:21:46 [alert] 29076#0: worker process 29077 exited with fatal code 2 and cannot be respawned
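Before changing anything, it is worth confirming what the current permissions on /etc/fdfs actually are; the commands below are a suggested check, and their output is not part of the original session:

# show the mode, owner and group of the config directory
ls -ld /etc/fdfs
# print just the octal mode and ownership
stat -c '%a %U:%G' /etc/fdfs
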

Based on the Permission denied error and a few blog posts found online, and after comparing the /etc/fdfs directory permissions with the production FastDFS server, the /etc/fdfs directory permissions were changed to 755 and the tracker, storage and Nginx services were restarted:

# chmod 755 /etc/fdfs
# /usr/bin/fdfs_trackerd /etc/fdfs/tracker.conf restart
# /usr/bin/fdfs_storaged /etc/fdfs/storage.conf restart
# cd /usr/local/nginx/sbin/
# ./nginx -s reload
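To confirm the fix actually makes the config readable for the Nginx workers, one can try reading it as the worker user; nobody is the worker user shown in the ps output below, and the command itself is a suggestion rather than part of the original session:

# prints "readable" if the nobody user can open the file; fails with Permission denied otherwise
sudo -u nobody cat /etc/fdfs/mod_fastdfs.conf > /dev/null && echo readable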

Checking the Nginx processes again, the worker process was now present:

[[email protected] ~]# ps -ef|grep nginx
nobody     363 29076  0 14:55 ?        00:00:00 nginx: worker process         # the worker process is back
root      8456  8381  0 15:27 pts/8    00:00:00 grep --color=auto nginx
root     29076     1  0 10:21 ?        00:00:00 nginx: master process ./nginx
[[email protected] ~]#

Accessing the picture again, content was returned (the URL ends in -m, so what comes back is the FastDFS metadata file rather than the image itself):

[[email protected] logs]# curl http://10.0.0.10:8090/groupA/M00/00/00/cErM6luMkf-IbhOWAAhHLHLDXwwAAAABQKwYD8ACEdE376.jpg-m
fileExtNamejpgfileLength542508fileNameIMG_1171.jpg
[[email protected] logs]#

The testers confirmed that pictures in the new environment could be accessed normally.

2. Image upload exception: problem description

After the testers reported that image access was back to normal, the upload function was also tested just to be safe, and uploading an image failed:

[[email protected] ~]# /usr/bin/fdfs_test /etc/fdfs/client.conf upload /tmp/test/test10.png
This is FastDFS client test program v5.05
Copyright (C), Happy Fish / YuQing
FastDFS may be copied only under the terms of the GNU General Public License V3, which may be found in the FastDFS source kit.
Please visit the FastDFS Home Page http://www.csource.org/ for more detail.
[2018-09-05 11:08:23] DEBUG - base_path=/data/dfs/tracker, connect_timeout=30, network_timeout=60, tracker_server_count=1, anti_steal_token=0, anti_steal_secret_key length=0, use_connection_pool=0, g_connection_pool_max_idle_time=3600s, use_storage_id=0, storage server id count: 0
tracker_query_storage_store_list_without_group:
        server 1. group_name=, ip_addr=10.0.0.10, port=23000
group_name=groupA, ip_addr=10.0.0.10, port=23000
[2018-09-05 11:08:23] ERROR - file: tracker_proto.c, line: 48, server: 10.0.0.10:23000, response status != 0
storage_upload_by_filename
upload file fail, error no: 17, error info: File exists
# "File exists" is reported here; many different new pictures were tried, and every upload failed with this same error
[[email protected] ~]#
Cause analysis

Checking storaged.log, the following errors were printed every time a file upload was attempted:

[2018-09-05 11:15:45] ERROR - file: storage_dio.c, line: 885, trunk file: /data/dfs/group1/data/00/00/000001, offset: 299076 already occupied by other file, trunk header info: file_type=-88, alloc_size=-1127393023, file_size=-397478323, crc32=485419875, mtime=-592647312, ext_name(7)=(<\?f§r
[2018-09-05 11:15:45] WARNING - file: trunk_mgr/trunk_mem.c, line: 1620, trunk space already be occupied, delete this trunk space, trunk info: store_path_index=0, sub_path_high=0, sub_path_low=0, id=1, offset=299076, size=24885, status=1
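The error says the trunk slot at offset 299076 of trunk file 000001 already contains data whose trunk header does not match what the trunk metadata expects. To look at the raw bytes at that offset, something like the following could be used (the offset is taken from the log above; the command is a suggestion, not from the original session):

# dump 128 bytes of the trunk file starting at the offset reported in the error
hexdump -C -s 299076 -n 128 /data/dfs/group1/data/00/00/000001
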

This was the first time this problem had come up, and searching the error message on Baidu and Google for a long time turned up no matching solution. Since the upload function had worked fine right after FastDFS was installed, the suspicion was that replacing the data directory had caused it. Restoring the new environment's original data directory did indeed make uploads work again, so the problem was definitely in the replaced data directory.

Reading the error message again, it clearly points at /data/dfs/group1/data/00/00/000001. Copying over the production data directory as before only allowed files to be read, not uploaded: the trunk allocation metadata kept from the new environment (storage_trunk.dat and the trunk directory) no longer matched the trunk file 000001 copied from production. So, guided by the error message, the production data directory was applied once more, this time keeping the new environment's own copies of not only

/data/dfs/tracker Directory
/data/dfs/group1/data/fdfs_storaged.pid
/data/dfs/group1/data/storage_stat.dat
/data/dfs/group1/data/storage_trunk.dat
/data/dfs/group1/data/sync Directory
/data/dfs/group1/data/trunk Directory

but also /data/dfs/group1/data/00/00/000001, which was replaced with the new environment's original 000001 file. After restarting the tracker, storage and Nginx services, the upload function was restored:

[[email protected] data]# /usr/bin/fdfs_test /etc/fdfs/client.conf upload /tmp/test/test2.png
This is FastDFS client test program v5.05
Copyright (C), Happy Fish / YuQing
FastDFS may be copied only under the terms of the GNU General Public License V3, which may be found in the FastDFS source kit.
Please visit the FastDFS Home Page http://www.csource.org/ for more detail.
[2018-09-05 15:40:56] DEBUG - base_path=/data/dfs/tracker, connect_timeout=30, network_timeout=60, tracker_server_count=1, anti_steal_token=0, anti_steal_secret_key length=0, use_connection_pool=0, g_connection_pool_max_idle_time=3600s, use_storage_id=0, storage server id count: 0
tracker_query_storage_store_list_without_group:
        server 1. group_name=, ip_addr=10.0.0.10, port=23000
group_name=groupA, ip_addr=10.0.0.10, port=23000
storage_upload_by_filename
group_name=groupA, remote_filename=M00/00/00/rDN5CluPiIiIV-bEAABFIRkNTkMAAAAAQAGYYsAAEU5442.png
source ip address: 10.0.0.10
file timestamp=2018-09-05 15:40:56
file size=17697
file crc32=420302403
example file url: http://10.0.0.10:8090/groupA/M00/00/00/rDN5CluPiIiIV-bEAABFIRkNTkMAAAAAQAGYYsAAEU5442.png
storage_upload_slave_by_filename
group_name=groupA, remote_filename=M00/00/00/rDN5CluPiIiIV-bEAABFIRkNTkMAAAAAQAGYYsAAEU5442_big.png
source ip address: 10.0.0.10
file timestamp=2018-09-05 15:40:56
file size=17697
file crc32=420302403
example file url: http://10.0.0.10:8090/groupA/M00/00/00/rDN5CluPiIiIV-bEAABFIRkNTkMAAAAAQAGYYsAAEU5442_big.png
[[email protected] data]#

Looking in the data directory, the uploaded files are there:

[[email protected] data]# ll /data/dfs/group1/data/00/00/|grep 5442
-rw-r--r-- 1 root root    17697 Sep  5 15:40 rDN5CluPiIiIV-bEAABFIRkNTkMAAAAAQAGYYsAAEU5442_big.png
-rw-r--r-- 1 root root       49 Sep  5 15:40 rDN5CluPiIiIV-bEAABFIRkNTkMAAAAAQAGYYsAAEU5442_big.png-m
-rw-r--r-- 1 root root       49 Sep  5 15:40 rDN5CluPiIiIV-bEAABFIRkNTkMAAAAAQAGYYsAAEU5442.png-m
[[email protected] data]#

Fetching them with curl also returns content:

[[email protected] data]# curl http://10.0.0.10:8090/groupA/M00/00/00/rDN5CluPiIiIV-bEAABFIRkNTkMAAAAAQAGYYsAAEU5442_big.png-m
ext_namejpgfile_size115120height80width160
[[email protected] data]# curl http://10.0.0.10:8090/groupA/M00/00/00/rDN5CluPiIiIV-bEAABFIRkNTkMAAAAAQAGYYsAAEU5442.png-m
ext_namejpgfile_size115120height80width160
[[email protected] data]#

Accessing
http://10.0.0.10:8090/groupA/M00/00/00/rDN5CluPiIiIV-bEAABFIRkNTkMAAAAAQAGYYsAAEU5442_big.png
in a browser also shows the picture.

Summary:
In conclusion, when replacing the new environment's data directory with the production FastDFS data directory, the following steps are needed:
1. Empty the log files inside the copied production data directory;
2. Keep the new environment's own copies of the following files and directories (a consolidated sketch of the whole procedure follows the list):

/data/dfs/tracker Directory
/data/dfs/group1/data/fdfs_storaged.pid
/data/dfs/group1/data/storage_stat.dat
/data/dfs/group1/data/storage_trunk.dat
/data/dfs/group1/data/sync Directory
/data/dfs/group1/data/trunk Directory
/data/dfs/group1/data/00/00/000001
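
A minimal sketch of the whole replacement procedure, assuming /data/dfs is the data directory, that the production copy has been unpacked to /tmp/prod-dfs on the new server, and that the services are controlled with the same commands used earlier in this post (the paths and backup location are assumptions, not from the original article):

# 1. back up the new environment's own data directory first
cp -a /data/dfs /data/dfs.bak

# 2. bring in the production data directory
rsync -a --delete /tmp/prod-dfs/ /data/dfs/

# 3. empty the log files that came from production
find /data/dfs -name '*.log' -type f -exec truncate -s 0 {} \;

# 4. restore the new environment's own copies of the per-instance files
#    (tracker data, pid, stats, trunk metadata and the trunk file the
#    upload error pointed at)
for p in tracker \
         group1/data/fdfs_storaged.pid \
         group1/data/storage_stat.dat \
         group1/data/storage_trunk.dat \
         group1/data/sync \
         group1/data/trunk \
         group1/data/00/00/000001; do
    rm -rf "/data/dfs/$p"
    cp -a "/data/dfs.bak/$p" "/data/dfs/$p"
done

# 5. restart the services
/usr/bin/fdfs_trackerd /etc/fdfs/tracker.conf restart
/usr/bin/fdfs_storaged /etc/fdfs/storage.conf restart
/usr/local/nginx/sbin/nginx -s reload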

