Nginx errors, Flume collection, too many bugs

netstat -ntpl
[root@bigdatahadoop sbin]# ./nginx -t -c /usr/tengine-2.1.0/conf/nginx.conf
nginx: [emerg] "upstream" directive is not allowed here in /usr/tengine-2.1.0/conf/nginx.conf:47
configuration file /usr/tengine-2.1.0/conf/nginx.conf test failed


The fix: one more closing brace }. A missing brace had left the upstream directive in the wrong context.
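For reference, nginx only allows upstream blocks directly inside the http context, so a stray or missing brace can push one into a server block and trigger exactly this emerg. A minimal sketch of correct placement (the upstream name and backend address are hypothetical, not from the original config):

    http {
        upstream backend {                  # upstream sits directly under http
            server 192.168.184.188:8080;
        }
        server {
            listen 80;
            location / {
                proxy_pass http://backend;  # refer to the upstream by name
            }
        }
    }                                       # the closing brace that is easy to drop

After editing, re-run ./nginx -t -c /usr/tengine-2.1.0/conf/nginx.conf until the test passes.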


16/06/26 14:06:01 WARN node.AbstractConfigurationProvider: No configuration found for this host:clin1

Possibly the Java environment variable (though "this may not be right"). In practice this warning usually means the agent name passed with -n (here clin1) does not match any agent name defined in the configuration file.


org.apache.commons.cli.ParseException: The specified configuration file does not exist: /usr/apache-flume-1.6.0-bin/bin/ile


Error 1: in the conf file, leave no space on either side of the equals sign ("this may not be right").

Error 2: wrong start command.

Wrong: bin/flume-ng agent -n clei -c conf-file /usr/apache-flume-1.6.0-bin/conf/test2 -Dflume.root.logger=INFO,console

Correct: flume-ng agent --conf conf --conf-file test3 --name a1 -Dflume.root.logger=INFO,console
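The --name value must match the agent name used inside the conf file, which is also what the "no configuration found for this host" warning above points at. A minimal sketch of what a matching test3 could look like (the source/channel/sink names and the netcat source are hypothetical; the original file is not shown):

    # agent is named a1, matching --name a1
    a1.sources = r1
    a1.channels = c1
    a1.sinks = k1

    a1.sources.r1.type = netcat
    a1.sources.r1.bind = localhost
    a1.sources.r1.port = 44444
    a1.sources.r1.channels = c1

    a1.channels.c1.type = memory

    a1.sinks.k1.type = logger
    a1.sinks.k1.channel = c1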



16/06/26 18:08:45 ERROR source.SpoolDirectorySource: FATAL: Spool Directory source r1: {spoolDir: /opt/sqooldir}: Uncaught exception in SpoolDirectorySource thread. Restart or reconfigure Flume to continue processing.
java.nio.charset.MalformedInputException: Input length = 1
    at java.nio.charset.CoderResult.throwException(CoderResult.java:277)
    at org.apache.flume.serialization.ResettableFileInputStream.readChar(ResettableFileInputStream.java:195)
    at org.apache.flume.serialization.LineDeserializer.readLine(LineDeserializer.java:133)
    at org.apache.flume.serialization.LineDeserializer.readEvent(LineDeserializer.java:71)
    at org.apache.flume.serialization.LineDeserializer.readEvents(LineDeserializer.java:90)
    at org.apache.flume.client.avro.ReliableSpoolingFileEventReader.readEvents(ReliableSpoolingFileEventReader.java:252)
    at org.apache.flume.source.SpoolDirectorySource$SpoolDirectoryRunnable.run(SpoolDirectorySource.java:228)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)


The cause: a compressed (binary) file was dropped into the spool directory. The spooling directory source reads files as character data, so the line deserializer fails on non-text bytes; a problematic file name causes trouble too, and a video file would presumably be even worse.
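If undecodable bytes may show up, the spooling source can be told how to handle them. A hedged sketch (agent/source names are hypothetical; decodeErrorPolicy is available in Flume 1.4+, and its default FAIL raises exactly this MalformedInputException):

    a1.sources.r1.type = spooldir
    a1.sources.r1.spoolDir = /opt/sqooldir
    a1.sources.r1.inputCharset = UTF-8
    # REPLACE substitutes U+FFFD for bad bytes; IGNORE drops them; FAIL (default) throws
    a1.sources.r1.decodeErrorPolicy = REPLACE

Truly binary files (archives, video) are still better kept out of the spool directory entirely.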



16/06/26 18:18:59 INFO ipc.NettyServer: [id: 0x6fef6466, /192.168.184.188:40594 => /192.168.184.188:44444] CONNECTED: /192.168.184.188:40594
16/06/26 18:19:05 INFO hdfs.HDFSDataStream: Serializer = TEXT, UseRawLocalFileSystem = false
16/06/26 18:19:08 INFO hdfs.BucketWriter: Creating hdfs://bigdatastorm:8020/flume/data/FlumeData.1466936345775.tmp
16/06/26 18:19:18 WARN hdfs.HDFSEventSink: HDFS IO error
java.io.IOException: Callable timed out after 10000 ms on file: hdfs://bigdatastorm:8020/flume/data/FlumeData.1466936345775.tmp
    at org.apache.flume.sink.hdfs.BucketWriter.callWithTimeout(BucketWriter.java:693)
    at org.apache.flume.sink.hdfs.BucketWriter.open(BucketWriter.java:235)
    at org.apache.flume.sink.hdfs.BucketWriter.append(BucketWriter.java:514)
    at org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:418)
    at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
    at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.util.concurrent.TimeoutException
    at java.util.concurrent.FutureTask.get(FutureTask.java:201)
    at org.apache.flume.sink.hdfs.BucketWriter.callWithTimeout(BucketWriter.java:686)
    ... 6 more
16/06/26 18:19:24 INFO hdfs.BucketWriter: Creating hdfs://bigdatastorm:8020/flume/data/FlumeData.1466936345776.tmp
16/06/26 18:19:38 INFO hdfs.BucketWriter: Closing idle bucketWriter hdfs://bigdatastorm:8020/flume/data/FlumeData.1466936345776.tmp at 1466936378715
16/06/26 18:19:38 INFO hdfs.BucketWriter: Closing hdfs://bigdatastorm:8020/flume/data/FlumeData.1466936345776.tmp
16/06/26 18:19:39 INFO hdfs.BucketWriter: Renaming hdfs://bigdatastorm:8020/flume/data/FlumeData.1466936345776.tmp to hdfs://bigdatastorm:8020/flume/data/FlumeData.1466936345776
16/06/26 18:19:39 INFO hdfs.HDFSEventSink: Writer callback called.
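The 10000 ms in that IOException is the HDFS sink's default hdfs.callTimeout. When the NameNode or DataNodes are slow to respond, a longer timeout can ride out the delay; a hedged sketch (sink name hypothetical):

    a1.sinks.k1.type = hdfs
    a1.sinks.k1.hdfs.path = hdfs://bigdatastorm:8020/flume/data
    # default is 10000 ms; raise it if open/append calls legitimately take longer
    a1.sinks.k1.hdfs.callTimeout = 60000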




Strange, puzzling errors


16/06/26 22:55:52 INFO hdfs.HDFSEventSink: Writer callback called.
16/06/26 22:55:52 INFO hdfs.HDFSEventSink: Bucket was closed while trying to append, reinitializing bucket and writing event.
16/06/26 22:55:52 INFO hdfs.HDFSDataStream: Serializer = TEXT, UseRawLocalFileSystem = false
16/06/26 22:55:53 INFO hdfs.BucketWriter: Creating hdfs://mycluster/flume/data/16-06-26/FlumeData.1466952952985.tmp
16/06/26 22:55:56 INFO hdfs.DFSClient: Exception in createBlockOutputStream
java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
    at org.apache.hadoop.hdfs.DFSOutputStream.createSocketForPipeline(DFSOutputStream.java:1526)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1328)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1281)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:526)
16/06/26 22:55:56 INFO hdfs.DFSClient: Abandoning BP-731760634-192.168.184.188-1463927117534:blk_1073742940_2145
16/06/26 22:55:56 INFO hdfs.DFSClient: Excluding datanode 192.168.184.188:50010
16/06/26 22:56:01 INFO hdfs.BucketWriter: Closing idle bucketWriter hdfs://mycluster/flume/data/16-06-26/FlumeData.1466952952985.tmp at 1466952961311
16/06/26 22:56:01 INFO hdfs.BucketWriter: Closing hdfs://mycluster/flume/data/16-06-26/FlumeData.1466952952985.tmp
16/06/26 22:56:01 INFO hdfs.BucketWriter: Renaming hdfs://mycluster/flume/data/16-06-26/FlumeData.1466952952985.tmp to hdfs://mycluster/flume/data/16-06-26/FlumeData.1466952952985
16/06/26 22:56:01 INFO hdfs.HDFSEventSink: Writer callback called.
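"Connection refused" while creating the block output stream, followed by "Excluding datanode 192.168.184.188:50010", suggests the DataNode process was down or not listening on its transfer port. Some quick checks (standard Hadoop commands; 50010 is the default DataNode transfer port):

    jps                          # is a DataNode process running on that host?
    netstat -ntpl | grep 50010   # is anything listening on the DataNode port?
    hdfs dfsadmin -report        # how many live DataNodes does the NameNode see?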





16/06/26 23:12:10 ERROR source.AvroSource: Avro source r1: Unable to process event batch. Exception follows.
org.apache.flume.ChannelException: Unable to put batch on required channel: org.apache.flume.channel.MemoryChannel{name: c1}
    at org.apache.flume.channel.ChannelProcessor.processEventBatch(ChannelProcessor.java:200)
    at org.apache.flume.source.AvroSource.appendBatch(AvroSource.java:386)
    at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.avro.ipc.specific.SpecificResponder.respond(SpecificResponder.java:91)
    at org.apache.avro.ipc.Responder.respond(Responder.java:151)
    at org.apache.avro.ipc.NettyServer$NettyServerAvroHandler.messageReceived(NettyServer.java:188)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.apache.avro.ipc.NettyServer$NettyServerAvroHandler.handleUpstream(NettyServer.java:173)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:558)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:786)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:458)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:439)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:558)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:553)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
    at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:84)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.processSelectedKeys(AbstractNioWorker.java:471)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:332)
    at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:35)
    at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:102)
    at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.flume.ChannelFullException: Space for commit to queue couldn't be acquired. Sinks are likely not keeping up with sources, or the buffer size is too tight
    at org.apache.flume.channel.MemoryChannel$MemoryTransaction.doCommit(MemoryChannel.java:130)
    at org.apache.flume.channel.BasicTransactionSemantics.commit(BasicTransactionSemantics.java:151)
    at org.apache.flume.channel.ChannelProcessor.processEventBatch(ChannelProcessor.java:192)
    ... more
16/06/26 23:12:10 INFO ipc.NettyServer: [id: 0x83b4ecf4, /192.168.184.188:43082 :> /192.168.184.188:44444] DISCONNECTED
16/06/26 23:12:10 INFO ipc.NettyServer: [id: 0x83b4ecf4, /192.168.184.188:43082 :> /192.168.184.188:44444] UNBOUND
16/06/26 23:12:10 INFO ipc.NettyServer: [id: 0x83b4ecf4, /192.168.184.188:43082 :> /192.168.184.188:44444] CLOSED
16/06/26 23:12:10 INFO ipc.NettyServer: Connection to /192.168.184.188:43082 disconnected.
16/06/26 23:12:12 INFO hdfs.HDFSDataStream: Serializer = TEXT, UseRawLocalFileSystem = false
16/06/26 23:12:12 INFO hdfs.BucketWriter: Creating hdfs://mycluster/flume/data/16-06-26/FlumeData.1466953932153.tmp
16/06/26 23:12:15 INFO ipc.NettyServer: [id: 0xcab3190c, /192.168.184.188:43085 => /192.168.184.188:44444] OPEN
16/06/26 23:12:15 INFO ipc.NettyServer: [id: 0xcab3190c, /192.168.184.188:43085 => /192.168.184.188:44444] BOUND: /192.168.184.188:44444
16/06/26 23:12:15 INFO ipc.NettyServer: [id: 0xcab3190c, /192.168.184.188:43085 => /192.168.184.188:44444] CONNECTED: /192.168.184.188:43085
16/06/26 23:12:38 INFO hdfs.BucketWriter: Closing hdfs://mycluster/flume/data/16-06-26/FlumeData.1466953932153.tmp
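The root cause here is the ChannelFullException: the memory channel's queue filled up because the HDFS sink drained events more slowly than the Avro source delivered them. Options are to speed up the sink, add more sinks, or give the channel more headroom. A hedged sketch of the latter (names hypothetical; Flume 1.6 defaults are capacity = 100 and transactionCapacity = 100):

    a1.channels.c1.type = memory
    # how many events the channel can hold in total
    a1.channels.c1.capacity = 10000
    # max events per transaction; must be >= the source/sink batch size
    a1.channels.c1.transactionCapacity = 1000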




