Recently we found that some of the data stored in HDFS was in the wrong format: the records contain \r\n characters that the processing program did not account for. The history covers roughly one year of data, and the bad or duplicate records need to be removed so that only the correct data is kept. The project uses Pig for data processing, so I wrote a Java UDF class to filter out the bad records and store the bad data and the good data in separate locations. I then wrote the shell script below to count the records and check the schema for each job folder, and recorded the results here for future reference.
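The post does not include the UDF source itself. A minimal sketch of that kind of filter, assuming the record arrives as a single chararray field and using an illustrative class name (not the project's actual code), might look like this:

import java.io.IOException;

import org.apache.pig.FilterFunc;
import org.apache.pig.data.Tuple;

// Returns true for records that do NOT contain stray \r or \n characters,
// so clean and dirty records can be routed to different output paths.
public class IsCleanRecord extends FilterFunc {
    @Override
    public Boolean exec(Tuple input) throws IOException {
        if (input == null || input.size() == 0) {
            return false;
        }
        Object field = input.get(0);
        if (field == null) {
            return false;
        }
        String line = field.toString();
        // a record is "bad" if it still carries carriage-return or line-feed characters
        return !(line.contains("\r") || line.contains("\n"));
    }
}

In the Pig script such a class would be registered with REGISTER and applied in a FILTER (or a SPLIT) so that good and bad records are stored to separate HDFS paths; the /user/chran/txt and /user/chran/txt/error locations referenced by the script below appear to be those two outputs.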
#!/bin/sh
curdir=`cd "$(dirname $0)"; pwd`

# $1: file listing the HDFS job folders to check, $2: summary output file
summary()
{
    files=""
    printf "job\ttotalqueries\tgoodqueries\tbadqueries\n" > $2
    while read job
    do
        if [ -z "$files" ]; then
            files="$job/par*"
        else
            files="$files $job/par*"
        fi
        totalqueries=`hadoop fs -text $job/par* | wc -l`
        goodqueries=`hadoop fs -text /user/chran/txt$job/par* | wc -l`
        badqueries=`hadoop fs -text /user/chran/txt/error$job/par* | wc -l`
        #distinctqueries=`hadoop fs -text $job/par* | awk -F '\a' '{print NF}' | sort | uniq`
        printf "$job\t$totalqueries\t$goodqueries\t$badqueries\n" >> $2
    done < $1
}

# $1: HDFS input folder, $2: name of the summary file to write under ./temp
check()
{
    tempdir=$curdir/temp
    if [ ! -d $tempdir ]; then
        mkdir -p $tempdir
    fi
    # clean up result files
    output=$tempdir/$2
    rm $output

    if ! hadoop fs -test -d $1 ; then
        echo "$1 in hdfs doesn't exist"
        exit -1
    fi

    # list all sub folders (keep only the partition folders ending in 00)
    folderlist=$tempdir/$2.folderlist.temp
    #hadoop fs -ls $1 | awk '{print $NF}' | uniq | sort > $folderlist
    hadoop fs -lsr $1 | grep "/[0-9][0-9]\$" | grep "00\$" | awk '{print $NF}' | uniq | sort > $folderlist

    summary $folderlist $output

    rm $folderlist
}
check "/apps/risk/ars/social/raw/SOCIAL_FACEBOOK_RAW" "check_facebook.output.txt"