I am trying to use Avro for messages that are read from / written to Kafka. Does anyone have an example of using the Avro binary encoder to encode/decode data that will be put on a message queue?
I need the Avro part more than the Kafka part. Or perhaps I should look at a different solution altogether? Basically, I am trying to find a solution that is more space-efficient than JSON. Avro was just mentioned because it is more compact than JSON.
This is a basic example. I have not tried it with multiple partitions/topics.
// Sample producer code
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.BinaryEncoder;
import org.apache.avro.io.DatumWriter;
import org.apache.avro.io.EncoderFactory;
import org.apache.avro.specific.SpecificDatumWriter;
import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

import java.io.ByteArrayOutputStream;
import java.io.File;
import java.io.IOException;
import java.util.Arrays;
import java.util.Properties;

public class ProducerTest {

    void producer(Schema schema) throws IOException {
        Properties props = new Properties();
        props.put("metadata.broker.list", "0:9092");
        props.put("serializer.class", "kafka.serializer.DefaultEncoder");
        props.put("request.required.acks", "1");
        ProducerConfig config = new ProducerConfig(props);
        Producer<String, byte[]> producer = new Producer<String, byte[]>(config);

        // Step 1: Create a GenericRecord for the schema
        GenericRecord payload1 = new GenericData.Record(schema);

        // Step 2: Put data in that GenericRecord object
        payload1.put("desc", "'testdata'");
        payload1.put("name", "dbevent1");
        payload1.put("id", 111);
        System.out.println("Original Message : " + payload1);

        // Step 3: Serialize the record to a byte array
        DatumWriter<GenericRecord> writer = new SpecificDatumWriter<GenericRecord>(schema);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        BinaryEncoder encoder = EncoderFactory.get().binaryEncoder(out, null);
        writer.write(payload1, encoder);
        encoder.flush();
        out.close();
        byte[] serializedBytes = out.toByteArray();
        System.out.println("Sending message in bytes : " + Arrays.toString(serializedBytes));

        // Step 4: Send the serialized bytes to the "page_views" topic
        KeyedMessage<String, byte[]> message = new KeyedMessage<String, byte[]>("page_views", serializedBytes);
        producer.send(message);
        producer.close();
    }

    public static void main(String[] args) throws IOException {
        ProducerTest test = new ProducerTest();
        Schema schema = new Schema.Parser().parse(new File("src/test_schema.avsc"));
        test.producer(schema);
    }
}
// Sample consumer code
Part 1: Consumer group code, since you can have more than one consumer for multiple partitions/topics.
import kafka.consumer.ConsumerConfig;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ConsumerGroupExample {
    private final ConsumerConnector consumer;
    private final String topic;
    private ExecutorService executor;

    public ConsumerGroupExample(String a_zookeeper, String a_groupId, String a_topic) {
        consumer = kafka.consumer.Consumer.createJavaConsumerConnector(
                createConsumerConfig(a_zookeeper, a_groupId));
        this.topic = a_topic;
    }

    private static ConsumerConfig createConsumerConfig(String a_zookeeper, String a_groupId) {
        Properties props = new Properties();
        props.put("zookeeper.connect", a_zookeeper);
        props.put("group.id", a_groupId);
        props.put("zookeeper.session.timeout.ms", "400");
        props.put("zookeeper.sync.time.ms", "200");
        props.put("auto.commit.interval.ms", "1000");
        return new ConsumerConfig(props);
    }

    public void shutdown() {
        if (consumer != null) consumer.shutdown();
        if (executor != null) executor.shutdown();
        try {
            if (!executor.awaitTermination(5000, TimeUnit.MILLISECONDS)) {
                System.out.println("Timed out waiting for consumer threads to shut down, exiting uncleanly");
            }
        } catch (InterruptedException e) {
            System.out.println("Interrupted during shutdown, exiting uncleanly");
        }
    }

    public void run(int a_numThreads) {
        // Map of topic as key and number of threads for that topic
        Map<String, Integer> topicCountMap = new HashMap<String, Integer>();
        topicCountMap.put(topic, a_numThreads);

        // Create message streams for each topic
        Map<String, List<KafkaStream<byte[], byte[]>>> consumerMap =
                consumer.createMessageStreams(topicCountMap);
        List<KafkaStream<byte[], byte[]>> streams = consumerMap.get(topic);

        // Initialize the thread pool and start one consumer per stream
        executor = Executors.newFixedThreadPool(a_numThreads);
        int threadNumber = 0;
        for (final KafkaStream<byte[], byte[]> stream : streams) {
            executor.submit(new ConsumerTest(stream, threadNumber));
            threadNumber++;
        }
    }

    public static void main(String[] args) {
        String zooKeeper = args[0];
        String groupId = args[1];
        String topic = args[2];
        int threads = Integer.parseInt(args[3]);

        ConsumerGroupExample example = new ConsumerGroupExample(zooKeeper, groupId, topic);
        example.run(threads);

        try {
            Thread.sleep(10000);
        } catch (InterruptedException ie) {
        }
        example.shutdown();
    }
}
Part 2: The individual consumer that actually consumes the messages.
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.DatumReader;
import org.apache.avro.io.Decoder;
import org.apache.avro.io.DecoderFactory;
import org.apache.avro.specific.SpecificDatumReader;

import java.io.File;
import java.util.Arrays;

public class ConsumerTest implements Runnable {
    private KafkaStream<byte[], byte[]> m_stream;
    private int m_threadNumber;

    public ConsumerTest(KafkaStream<byte[], byte[]> a_stream, int a_threadNumber) {
        m_threadNumber = a_threadNumber;
        m_stream = a_stream;
    }

    public void run() {
        ConsumerIterator<byte[], byte[]> it = m_stream.iterator();
        while (it.hasNext()) {
            try {
                // The raw Avro bytes as they arrived from Kafka
                byte[] received_message = it.next().message();
                System.out.println("Received bytes : " + Arrays.toString(received_message));

                // Deserialize the bytes back into a GenericRecord
                // (in real code, parse the schema once instead of per message)
                Schema schema = new Schema.Parser().parse(new File("src/test_schema.avsc"));
                DatumReader<GenericRecord> reader = new SpecificDatumReader<GenericRecord>(schema);
                Decoder decoder = DecoderFactory.get().binaryDecoder(received_message, null);
                GenericRecord payload2 = reader.read(null, decoder);
                System.out.println("Message received : " + payload2);
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }
}
Test Avro schema:
{ "namespace": "xyz.test", "type": "record", "name": "payload", "fields":[ { "name": "name", "type": "string" }, { "name": "id", "type": ["int", "null"] }, { "name": "desc", "type": ["string", "null"] } ] }
Important things to note:
You need the standard Kafka and Avro jars to run this code.
Very important: props.put("serializer.class", "kafka.serializer.DefaultEncoder"). Don't use StringEncoder, as that won't work if you are sending a byte array as the message.
You can also convert the byte[] to a hex string and send the string; the consumer then converts the hex string back to a byte[] and from that to the original message (see the sketch after these notes).
Run ZooKeeper and the broker as described at http://kafka.apache.org/documentation.html#quickstart and create a topic called "page_views", or whatever you like.
Run ProducerTest.java and then ConsumerGroupExample.java to see the Avro data being produced and consumed.
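The hex-string workaround mentioned in the notes above, as a minimal runnable sketch using the commons-codec Hex class (the byte values are stand-ins for the actual Avro output):

import org.apache.commons.codec.DecoderException;
import org.apache.commons.codec.binary.Hex;

public class HexRoundTrip {
    public static void main(String[] args) throws DecoderException {
        // Stand-in for the serialized Avro bytes from the producer
        byte[] serializedBytes = {0x10, (byte) 0xAF, 0x00};

        // Producer side: encode the bytes as a hex string before sending
        String serializedHex = Hex.encodeHexString(serializedBytes);

        // Consumer side: decode the received hex string back into the original bytes
        byte[] originalBytes = Hex.decodeHex(serializedHex.toCharArray());

        System.out.println(serializedHex + " -> " + originalBytes.length + " bytes");
    }
}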
I finally remembered to ask the Kafka mailing list, and got the following answer, which worked perfectly.
Yes, you can send messages as byte arrays. If you look at the constructor of the Message class, you will see:
def this(bytes: Array[Byte])
Now, look at the Producer's send() API:
def send(producerData: ProducerData[K, V]*)
You can set V to be of type Message and K to whatever type you want your key to be. If you don't care about partitioning using a key, then set that to the Message type as well.
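That answer refers to the older Scala API. With the 0.8-style Java API used in the example above, the same idea looks roughly like this (a sketch; the key value is illustrative, and config and serializedBytes are reused from the producer example):

// With DefaultEncoder, both key and value travel as raw byte arrays;
// the key is only needed if you want to control partitioning.
Producer<byte[], byte[]> byteProducer = new Producer<byte[], byte[]>(config);
byte[] key = "page-1".getBytes(java.nio.charset.StandardCharsets.UTF_8);
byteProducer.send(new KeyedMessage<byte[], byte[]>("page_views", key, serializedBytes));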
Thanks, Neha
If you want to get a byte array from an Avro message (the Kafka part has already been answered), use the binary encoder:
GenericDatumWriter<GenericRecord> writer = new GenericDatumWriter<GenericRecord>(schema);
ByteArrayOutputStream os = new ByteArrayOutputStream();
try {
    Encoder e = EncoderFactory.get().binaryEncoder(os, null);
    writer.write(record, e);
    e.flush();
    byte[] byteData = os.toByteArray();
} finally {
    os.close();
}
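For completeness, a sketch of the matching decode step, reusing schema and byteData from above (the reader classes come from org.apache.avro.generic and org.apache.avro.io):

// Turn the byte array back into a GenericRecord
DatumReader<GenericRecord> reader = new GenericDatumReader<GenericRecord>(schema);
Decoder decoder = DecoderFactory.get().binaryDecoder(byteData, null);
GenericRecord result = reader.read(null, decoder);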