Kafka Message Queue: Java Integration and Operations

Table of Contents

一、Integrating Kafka with Java:

1、Required environment:

2、Adding the jar dependencies:

3、Creating the base Properties:

(1)Base Properties for a single-node Kafka:

(2)Base Properties for a Kafka cluster:

4、Creating the common producer Properties:

5、Creating the common consumer Properties:

二、Operating Kafka from Java:

1、Creating a topic:

2、Listing all topics:

3、Deleting a topic:

4、Sending string messages:

5、Consuming string messages:

6、Sending object messages:

(1)Creating the entity class:

(2)Creating a utility method to serialize an object to a byte array:

(3)Creating the object encoder class:

(4)Sending object messages:

7、Consuming object messages:

(1)Creating a utility method to convert a byte array back to an object:

(2)Creating the object decoder class:

(3)Consuming object messages:


 

一、Integrating Kafka with Java:

1、Required environment:

JDK 1.8 or later is required, and both the project's compiler target and the language level in IDEA must be set to JDK 1.8.
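
As a minimal sketch, the compiler target can be pinned in the Maven build so it always matches the language level (the plugin configuration below is illustrative):

        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-compiler-plugin</artifactId>
            <configuration>
                <source>1.8</source>
                <target>1.8</target>
            </configuration>
        </plugin>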

2、Adding the jar dependencies:

Add the jar dependencies required by the Kafka client. Note that the client version should match the Kafka version installed on the server. For a pure client application the kafka-clients artifact alone is sufficient; the kafka_2.12 artifact additionally brings in the broker-side classes.

        <dependency>
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka_2.12</artifactId>
            <version>2.2.0</version>
        </dependency>
        <dependency>
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka-clients</artifactId>
            <version>2.2.0</version>
        </dependency>

3、Creating the base Properties:

Build the initial Properties object by configuring the address of the Kafka server(s); every client in this article (producer, consumer, and AdminClient) starts from this object.

(1)Base Properties for a single-node Kafka:

For a single-node Kafka, a single broker address is enough:

    //Build the base Kafka client Properties object
    public static Properties getKafkaProperties(){
        Properties props = new Properties();
        //single-node Kafka configuration; the ProducerConfig constant resolves to
        //"bootstrap.servers", the same key used by consumers and the AdminClient
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.0.141:9091");
        return props;
    }

(2)Base Properties for a Kafka cluster:

For a Kafka cluster, list the addresses of the cluster nodes, separated by commas (the client discovers the remaining brokers from any reachable one):

    //Build the base Kafka client Properties object
    public static Properties getKafkaProperties(){
        Properties props = new Properties();
        //cluster Kafka configuration: comma-separated broker addresses
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.0.141:9091,192.168.0.142:9092,192.168.0.143:9093");
        return props;
    }

4、Creating the common producer Properties:

Define a shared method that encapsulates the common producer parameters, so they can be adjusted in one place later.

    //Build the common producer configuration
    public static Properties getProducerCommonProperties(Properties properties){
        //wait for the full set of in-sync replicas to acknowledge each record
        properties.put(ProducerConfig.ACKS_CONFIG, "all");
        //do not retry failed sends
        properties.put(ProducerConfig.RETRIES_CONFIG, 0);
        //maximum batch size in bytes per partition
        properties.put(ProducerConfig.BATCH_SIZE_CONFIG, 16384);
        //wait up to 1 ms for more records before sending a batch
        properties.put(ProducerConfig.LINGER_MS_CONFIG, 1);
        //total memory (32 MB) for buffering records not yet sent
        properties.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 33554432);
        return properties;
    }

5、Creating the common consumer Properties:

Define a shared method that encapsulates the common consumer parameters, so they can be adjusted in one place later.

    //Build the common consumer configuration
    public static Properties getConsumerCommonProperties(Properties properties){
        //commit offsets automatically in the background
        properties.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true");
        //auto-commit interval: once per second
        properties.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, "1000");
        return properties;
    }

二、Operating Kafka from Java:

1、Creating a topic:

Create an AdminClient from the base Properties and use it to create the topic, specifying the topic name, the number of partitions, and the replication factor:

    /**
     * Create a topic
     * @param properties    Properties object
     * @param topicName topic name
     * @param partitionNum  number of partitions
     * @param replicationNum    replication factor
     */
    public static void createTopic(Properties properties,String topicName,Integer partitionNum,Integer replicationNum){
        AdminClient adminClient = AdminClient.create(properties);
        ArrayList<NewTopic> topics = new ArrayList<NewTopic>();
        NewTopic newTopic = new NewTopic(topicName, partitionNum, replicationNum.shortValue());
        topics.add(newTopic);
        CreateTopicsResult result = adminClient.createTopics(topics);
        try {
            //block until the creation request completes
            result.all().get();
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            //release the client's network resources
            adminClient.close();
        }
    }
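
A minimal usage sketch (the topic name and sizing below are illustrative):

    //create a topic named "java" with 1 partition and a replication factor of 1
    createTopic(getKafkaProperties(), "java", 1, 1);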

2、Listing all topics:

Create an AdminClient from the base Properties and list all topics:

    //List all topics
    public static void queryAllTopic(Properties properties){
        AdminClient adminClient = AdminClient.create(properties);
        try {
            Collection<TopicListing> topicListings = adminClient.listTopics().listings().get();
            for (TopicListing topicListing : topicListings) {
                System.out.println(topicListing);
            }
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            adminClient.close();
        }
    }

3、Deleting a topic:

Create an AdminClient from the base Properties and delete the topic identified by the topic name parameter:

    //Delete a topic
    public static void deleteTopic(Properties properties,String topicName){
        AdminClient adminClient = AdminClient.create(properties);
        DeleteTopicsResult result = adminClient.deleteTopics(Arrays.asList(topicName));
        try {
            result.all().get();
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            adminClient.close();
        }
    }

4、Sending string messages:

Starting from the common producer Properties, add the producer-specific parameters and send a message whose value is a string. The topic name and message content are passed as parameters:

    /**
     * Create a producer for messages of type string
     * @param properties    Properties object
     * @param topicName topic name
     * @param messageContent    message content
     */
    public static void createStringProducer(Properties properties,String topicName,String messageContent){
        properties = getProducerCommonProperties(properties);
        properties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
        properties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,"org.apache.kafka.common.serialization.StringSerializer");
        Producer<String, String> producer = new KafkaProducer<String, String>(properties);
        //producer.send(new ProducerRecord<String, String>(topicName,messageContent));//fire-and-forget, without a callback
        producer.send(new ProducerRecord<String, String>(topicName,messageContent),new Callback() {
                    public void onCompletion(RecordMetadata metadata,Exception e) {
                        if (e !=null) {
                            System.out.println("send failed");
                        }else {
                            System.out.println("send succeeded");
                        }
                    }
                });
        producer.close();
    }
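
A minimal usage sketch (the topic name and message are illustrative):

    //send one test message to the "java" topic
    createStringProducer(getKafkaProperties(), "java", "hello kafka");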

5、Consuming string messages:

Starting from the common consumer Properties, add the consumer-specific parameters and consume messages whose values are strings. The topic name and consumer group ID are passed as parameters:

    //Create a consumer for messages of type string
    public static void createStringConsumer(Properties properties,String topicName,String consumerGroupId){
        properties.put(ConsumerConfig.GROUP_ID_CONFIG, consumerGroupId);
        properties = getConsumerCommonProperties(properties);
        properties.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
        properties.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,"org.apache.kafka.common.serialization.StringDeserializer");
        final KafkaConsumer<String, String> consumer = new KafkaConsumer<String,String>(properties);
        consumer.subscribe(Arrays.asList(topicName),new ConsumerRebalanceListener() {
            public void onPartitionsRevoked(Collection<TopicPartition> collection) {
            }
            public void onPartitionsAssigned(Collection<TopicPartition> collection) {
                //uncomment to reset the offset to the beginning of each assigned partition
                //consumer.seekToBeginning(collection);
            }
        });
        while (true) {
            //poll(long) is deprecated in the 2.x clients; pass a java.time.Duration instead
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
            for (ConsumerRecord<String, String> record : records)
                System.out.printf("offset = %d, key = %s, value = %s%n", record.offset(), record.key(), record.value());
        }
    }
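
A minimal usage sketch (the topic name and group ID are illustrative); note that the method polls in an endless loop, so it should run on its own thread:

    //consume the "java" topic as consumer group "test-group"
    createStringConsumer(getKafkaProperties(), "java", "test-group");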

6、Sending object messages:

(1)Creating the entity class:

Create an entity class matching your business data as needed; the class must implement Serializable:

package kafka;

import java.io.Serializable;

/**
 * Test entity class
 */
public class StudentEntity implements Serializable{

    //recommended for Serializable classes, so that producer and consumer agree on the class version
    private static final long serialVersionUID = 1L;

    private String id="";
    private String name="";
    private String code="";

    public String getId() {
        return id;
    }

    public String getName() {
        return name;
    }

    public String getCode() {
        return code;
    }

    public void setId(String id) {
        this.id = id;
    }

    public void setName(String name) {
        this.name = name;
    }

    public void setCode(String code) {
        this.code = code;
    }

    @Override
    public String toString() {
        return "{" +
                "id='" + id + '\'' +
                ", name='" + name + '\'' +
                ", code='" + code + '\'' +
                '}';
    }
}

(2)Creating a utility method to serialize an object to a byte array:

Create a utility method that serializes an object into a byte array:

    /**
     * Serialize an object into a byte array
     *
     * @param obj
     * @return
     */
    public static byte[] bean2Byte(Object obj) {
        byte[] bb = null;
        //try-with-resources closes both streams automatically
        try (ByteArrayOutputStream byteArray = new ByteArrayOutputStream();
             ObjectOutputStream outputStream = new ObjectOutputStream(byteArray)) {
            outputStream.writeObject(obj);
            outputStream.flush();
            bb = byteArray.toByteArray();
        } catch (IOException e) {
            e.printStackTrace();
        }
        return bb;
    }

(3)Creating the object encoder class:

Create an encoder class that implements Serializer<Object> and overrides its serialize method, delegating to the object-to-byte-array utility written above:

package kafka;

import org.apache.kafka.common.serialization.Serializer;

import java.util.Map;

/**
 * Kafka object encoder
 */
public class EncodeingKafka implements Serializer<Object> {
    @Override
    public void configure(Map<String, ?> configs, boolean isKey) {

    }
    @Override
    public byte[] serialize(String topic, Object data) {
        return BeanUtils.bean2Byte(data);
    }
    /*
     * called when the producer's close() method is invoked
     */
    @Override
    public void close() {

    }

}

(4)Sending object messages:

Starting from the common producer Properties, add the producer-specific parameters and send a message whose value is an object. The topic name, the message object, and the fully qualified class name of the encoder are passed as parameters:

    /**
     * Create a producer for messages of type object
     * @param properties    Properties object
     * @param topicName topic name
     * @param messageContent    message content
     * @param serializerClassPath   fully qualified class name of the object encoder
     */
    public static void createObjectProducer(Properties properties,String topicName,Object messageContent,String serializerClassPath){
        properties = getProducerCommonProperties(properties);
        properties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
        properties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,serializerClassPath);
        Producer<String, Object> producer = new KafkaProducer<String, Object>(properties);
        //producer.send(new ProducerRecord<String, Object>(topicName,messageContent));//fire-and-forget, without a callback
        producer.send(new ProducerRecord<String, Object>(topicName,messageContent),new Callback() {
            public void onCompletion(RecordMetadata metadata,Exception e) {
                if (e !=null) {
                    System.out.println("send failed");
                }else {
                    System.out.println("send succeeded");
                }
            }
        });
        producer.close();
    }
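
A minimal usage sketch (the field values and topic name are illustrative):

    StudentEntity student = new StudentEntity();
    student.setId("1");
    student.setName("tom");
    student.setCode("s001");
    //EncodeingKafka.class.getName() resolves to "kafka.EncodeingKafka"
    createObjectProducer(getKafkaProperties(), "java", student, EncodeingKafka.class.getName());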

7、Consuming object messages:

(1)Creating a utility method to convert a byte array back to an object:

Create a utility method that deserializes a byte array back into an object:

    /**
     * Deserialize a byte array into an Object
     *
     * @param bytes
     * @return
     */
    public static Object byte2Obj(byte[] bytes) {
        Object readObject = null;
        //try-with-resources closes both streams automatically
        try (ByteArrayInputStream in = new ByteArrayInputStream(bytes);
             ObjectInputStream inputStream = new ObjectInputStream(in)) {
            readObject = inputStream.readObject();
        } catch (Exception e) {
            e.printStackTrace();
        }
        return readObject;
    }
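
A quick round-trip sketch to check the two utility methods against each other (field values are illustrative):

    StudentEntity original = new StudentEntity();
    original.setName("tom");
    byte[] bytes = BeanUtils.bean2Byte(original);
    StudentEntity copy = (StudentEntity) BeanUtils.byte2Obj(bytes);
    System.out.println(copy); //prints {id='', name='tom', code=''}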

(2)Creating the object decoder class:

Create a decoder class that implements Deserializer<Object> and overrides its deserialize method, delegating to the byte-array-to-object utility written above:

package kafka;

import org.apache.kafka.common.serialization.Deserializer;

import java.util.Map;

/**
 * Kafka object decoder
 */
public class DecodeingKafka implements Deserializer<Object> {
    @Override
    public void configure(Map<String, ?> configs, boolean isKey) {
    }

    @Override
    public Object deserialize(String topic, byte[] data) {
        return BeanUtils.byte2Obj(data);
    }

    @Override
    public void close() {

    }
}

(3)Consuming object messages:

Starting from the common consumer Properties, add the consumer-specific parameters and consume messages whose values are objects. The topic name, consumer group ID, and the fully qualified class name of the decoder are passed as parameters. Each received message value can then be cast directly to the corresponding entity type.

    //Create a consumer for messages of type object
    public static void createObjectConsumer(Properties properties,String topicName,String consumerGroupId,String deserializerClassPath){
        properties.put(ConsumerConfig.GROUP_ID_CONFIG, consumerGroupId);
        properties = getConsumerCommonProperties(properties);
        properties.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
        properties.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,deserializerClassPath);
        final KafkaConsumer<String, Object> consumer = new KafkaConsumer<String,Object>(properties);
        consumer.subscribe(Arrays.asList(topicName),new ConsumerRebalanceListener() {
            public void onPartitionsRevoked(Collection<TopicPartition> collection) {
            }
            public void onPartitionsAssigned(Collection<TopicPartition> collection) {
                //consumer.seekToBeginning(collection);//uncomment to reset the offset to the beginning
            }
        });
        while (true) {
            //poll(long) is deprecated in the 2.x clients; pass a java.time.Duration instead
            ConsumerRecords<String, Object> records = consumer.poll(Duration.ofMillis(100));
            for (ConsumerRecord<String, Object> record : records)
                if(record.value()!=null){
                    //each message value can be cast directly to the entity type
                    StudentEntity studentEntity = (StudentEntity) record.value();
                    System.out.println(studentEntity);
                }
        }
    }
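
A minimal usage sketch (the topic name and group ID are illustrative); like the string consumer, this polls in an endless loop:

    //DecodeingKafka.class.getName() resolves to "kafka.DecodeingKafka"
    createObjectConsumer(getKafkaProperties(), "java", "test-group", DecodeingKafka.class.getName());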

The complete Kafka client class, collecting the core operation methods above, is as follows:

package kafka;

import org.apache.kafka.clients.admin.*;
import org.apache.kafka.clients.consumer.*;
import org.apache.kafka.clients.producer.*;
import org.apache.kafka.common.TopicPartition;

import java.time.Duration;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.Properties;

/**
 * Kafka client
 */
public class KafkaClient {

    public static void main(String[] args){
        Properties properties = getKafkaProperties();
        String topicName = "java";
        Integer partitionNum = 1;
        Integer replicationNum = 1;
        //createTopic(properties,topicName,partitionNum,replicationNum);
        //deleteTopic(properties,topicName);
        //queryAllTopic(properties);
    }

    //Build the base Kafka client Properties object
    public static Properties getKafkaProperties(){
        Properties props = new Properties();
        //single-node Kafka configuration
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.0.141:9091");
        //cluster Kafka configuration
        //props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.0.141:9091,192.168.0.142:9092,192.168.0.143:9093");
        return props;
    }

    //Build the common producer configuration
    public static Properties getProducerCommonProperties(Properties properties){
        //wait for the full set of in-sync replicas to acknowledge each record
        properties.put(ProducerConfig.ACKS_CONFIG, "all");
        //do not retry failed sends
        properties.put(ProducerConfig.RETRIES_CONFIG, 0);
        //maximum batch size in bytes per partition
        properties.put(ProducerConfig.BATCH_SIZE_CONFIG, 16384);
        //wait up to 1 ms for more records before sending a batch
        properties.put(ProducerConfig.LINGER_MS_CONFIG, 1);
        //total memory (32 MB) for buffering records not yet sent
        properties.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 33554432);
        return properties;
    }

    //Build the common consumer configuration
    public static Properties getConsumerCommonProperties(Properties properties){
        //commit offsets automatically in the background
        properties.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true");
        //auto-commit interval: once per second
        properties.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, "1000");
        return properties;
    }

    /**
     * Create a topic
     * @param properties    Properties object
     * @param topicName topic name
     * @param partitionNum  number of partitions
     * @param replicationNum    replication factor
     */
    public static void createTopic(Properties properties,String topicName,Integer partitionNum,Integer replicationNum){
        AdminClient adminClient = AdminClient.create(properties);
        ArrayList<NewTopic> topics = new ArrayList<NewTopic>();
        NewTopic newTopic = new NewTopic(topicName, partitionNum, replicationNum.shortValue());
        topics.add(newTopic);
        CreateTopicsResult result = adminClient.createTopics(topics);
        try {
            //block until the creation request completes
            result.all().get();
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            //release the client's network resources
            adminClient.close();
        }
    }

    //List all topics
    public static void queryAllTopic(Properties properties){
        AdminClient adminClient = AdminClient.create(properties);
        try {
            Collection<TopicListing> topicListings = adminClient.listTopics().listings().get();
            for (TopicListing topicListing : topicListings) {
                System.out.println(topicListing);
            }
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            adminClient.close();
        }
    }

    //Delete a topic
    public static void deleteTopic(Properties properties,String topicName){
        AdminClient adminClient = AdminClient.create(properties);
        DeleteTopicsResult result = adminClient.deleteTopics(Arrays.asList(topicName));
        try {
            result.all().get();
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            adminClient.close();
        }
    }

    /**
     * Create a producer for messages of type string
     * @param properties    Properties object
     * @param topicName topic name
     * @param messageContent    message content
     */
    public static void createStringProducer(Properties properties,String topicName,String messageContent){
        properties = getProducerCommonProperties(properties);
        properties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
        properties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,"org.apache.kafka.common.serialization.StringSerializer");
        Producer<String, String> producer = new KafkaProducer<String, String>(properties);
        //producer.send(new ProducerRecord<String, String>(topicName,messageContent));//fire-and-forget, without a callback
        producer.send(new ProducerRecord<String, String>(topicName,messageContent),new Callback() {
                    public void onCompletion(RecordMetadata metadata,Exception e) {
                        if (e !=null) {
                            System.out.println("send failed");
                        }else {
                            System.out.println("send succeeded");
                        }
                    }
                });
        producer.close();
    }

    /**
     * Create a producer for messages of type object
     * @param properties    Properties object
     * @param topicName topic name
     * @param messageContent    message content
     * @param serializerClassPath   fully qualified class name of the object encoder
     */
    public static void createObjectProducer(Properties properties,String topicName,Object messageContent,String serializerClassPath){
        properties = getProducerCommonProperties(properties);
        properties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
        properties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,serializerClassPath);
        Producer<String, Object> producer = new KafkaProducer<String, Object>(properties);
        //producer.send(new ProducerRecord<String, Object>(topicName,messageContent));//fire-and-forget, without a callback
        producer.send(new ProducerRecord<String, Object>(topicName,messageContent),new Callback() {
            public void onCompletion(RecordMetadata metadata,Exception e) {
                if (e !=null) {
                    System.out.println("send failed");
                }else {
                    System.out.println("send succeeded");
                }
            }
        });
        producer.close();
    }

    //Create a consumer for messages of type string
    public static void createStringConsumer(Properties properties,String topicName,String consumerGroupId){
        properties.put(ConsumerConfig.GROUP_ID_CONFIG, consumerGroupId);
        properties = getConsumerCommonProperties(properties);
        properties.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
        properties.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,"org.apache.kafka.common.serialization.StringDeserializer");
        final KafkaConsumer<String, String> consumer = new KafkaConsumer<String,String>(properties);
        consumer.subscribe(Arrays.asList(topicName),new ConsumerRebalanceListener() {
            public void onPartitionsRevoked(Collection<TopicPartition> collection) {
            }
            public void onPartitionsAssigned(Collection<TopicPartition> collection) {
                //uncomment to reset the offset to the beginning of each assigned partition
                //consumer.seekToBeginning(collection);
            }
        });
        while (true) {
            //poll(long) is deprecated in the 2.x clients; pass a java.time.Duration instead
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
            for (ConsumerRecord<String, String> record : records)
                System.out.printf("offset = %d, key = %s, value = %s%n", record.offset(), record.key(), record.value());
        }
    }

    //Create a consumer for messages of type object
    public static void createObjectConsumer(Properties properties,String topicName,String consumerGroupId,String deserializerClassPath){
        properties.put(ConsumerConfig.GROUP_ID_CONFIG, consumerGroupId);
        properties = getConsumerCommonProperties(properties);
        properties.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
        properties.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,deserializerClassPath);
        final KafkaConsumer<String, Object> consumer = new KafkaConsumer<String,Object>(properties);
        consumer.subscribe(Arrays.asList(topicName),new ConsumerRebalanceListener() {
            public void onPartitionsRevoked(Collection<TopicPartition> collection) {
            }
            public void onPartitionsAssigned(Collection<TopicPartition> collection) {
                //consumer.seekToBeginning(collection);//uncomment to reset the offset to the beginning
            }
        });
        while (true) {
            //poll(long) is deprecated in the 2.x clients; pass a java.time.Duration instead
            ConsumerRecords<String, Object> records = consumer.poll(Duration.ofMillis(100));
            for (ConsumerRecord<String, Object> record : records)
                if(record.value()!=null){
                    //each message value can be cast directly to the entity type
                    StudentEntity studentEntity = (StudentEntity) record.value();
                    System.out.println(studentEntity);
                }
        }
    }




}

 
