There are not many examples of connecting to Kafka from Golang, and even fewer that support consumer offset tracking, which is nevertheless a basic requirement. This requirement can be met by github.com/bsm/sarama-cluster combined with github.com/Shopify/sarama.
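With Go 1.7 and a GOPATH-based workspace (Go modules did not exist yet), the packages used in the example below can be fetched with go get; the commands assume a standard GOPATH setup:

go get github.com/Shopify/sarama
go get github.com/bsm/sarama-cluster
go get github.com/golang/glog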
Environment:
Golang 1.7
Kafka 0.10
CentOS 7.2
package main

import (
	"fmt"
	"os"
	"strings"
	"time"

	"github.com/Shopify/sarama"
	cluster "github.com/bsm/sarama-cluster" // supports automatic consumer-group rebalancing and offset tracking
	"github.com/golang/glog"
)

func main() {
	groupID := "group-1"
	topicList := "topic_1"

	config := cluster.NewConfig()
	config.Consumer.Return.Errors = true
	config.Group.Return.Notifications = true
	config.Consumer.Offsets.CommitInterval = 1 * time.Second
	config.Consumer.Offsets.Initial = sarama.OffsetNewest // start from the latest offset

	c, err := cluster.NewConsumer(strings.Split("localhost:9092", ","), groupID, strings.Split(topicList, ","), config)
	if err != nil {
		glog.Errorf("Failed to open consumer: %v", err)
		return
	}
	defer c.Close()

	// consume errors
	go func() {
		for err := range c.Errors() {
			glog.Errorf("Error: %s\n", err.Error())
		}
	}()

	// consume rebalance notifications
	go func() {
		for note := range c.Notifications() {
			glog.Infof("Rebalanced: %+v\n", note)
		}
	}()

	for msg := range c.Messages() {
		fmt.Fprintf(os.Stdout, "%s/%d/%d\t%s\n", msg.Topic, msg.Partition, msg.Offset, msg.Value)
		// MarkOffset is not written to Kafka in real time; if the program crashes,
		// offsets that were marked but not yet committed may be lost.
		c.MarkOffset(msg, "")
	}
}
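Because marked offsets are only flushed every CommitInterval, the example defers c.Close() so the consumer can shut down cleanly; the sarama-cluster consumer also exposes a CommitOffsets() method if an explicit, synchronous commit is needed before exiting.

To give the consumer something to read, here is a minimal producer sketch using the same sarama package. The broker address and topic match the example above; the message contents and the producer settings (acks, returned successes) are illustrative assumptions, not taken from the original article.

package main

import (
	"log"
	"strings"

	"github.com/Shopify/sarama"
)

func main() {
	config := sarama.NewConfig()
	config.Producer.RequiredAcks = sarama.WaitForAll // wait for all in-sync replicas to ack
	config.Producer.Return.Successes = true          // required by the SyncProducer

	producer, err := sarama.NewSyncProducer(strings.Split("localhost:9092", ","), config)
	if err != nil {
		log.Fatalf("Failed to start producer: %v", err)
	}
	defer producer.Close()

	// send a couple of test messages to the topic consumed above
	for _, text := range []string{"hello", "world"} {
		partition, offset, err := producer.SendMessage(&sarama.ProducerMessage{
			Topic: "topic_1",
			Value: sarama.StringEncoder(text),
		})
		if err != nil {
			log.Printf("Send failed: %v", err)
			continue
		}
		log.Printf("Sent to partition %d at offset %d", partition, offset)
	}
}

Run the producer after starting the consumer; the consumer should print topic/partition/offset followed by the message body for each record it receives.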
Reference:
http://pastebin.com/9ZsnP2eU
https://github.com/Shopify/sarama
https://github.com/bsm/sarama-cluster