{ "cells": [ { "cell_type": "markdown", "metadata": { "id": "Tce3stUlHN0L" }, "source": [ "##### Copyright 2020 The TensorFlow IO Authors." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "cellView": "form", "id": "tuOe1ymfHZPu" }, "outputs": [], "source": [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n", "# you may not use this file except in compliance with the License.\n", "# You may obtain a copy of the License at\n", "#\n", "# https://www.apache.org/licenses/LICENSE-2.0\n", "#\n", "# Unless required by applicable law or agreed to in writing, software\n", "# distributed under the License is distributed on an \"AS IS\" BASIS,\n", "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", "# See the License for the specific language governing permissions and\n", "# limitations under the License." ] }, { "cell_type": "markdown", "metadata": { "id": "qFdPvlXBOdUN" }, "source": [ "# Robust machine learning on streaming data using Kafka and Tensorflow-IO" ] }, { "cell_type": "markdown", "metadata": { "id": "MfBg1C5NB3X0" }, "source": [ "\n", " \n", " \n", " \n", " \n", "
\n", " View on TensorFlow.org\n", " \n", " Run in Google Colab\n", " \n", " View source on GitHub\n", " \n", " Download notebook\n", "
" ] }, { "cell_type": "markdown", "metadata": { "id": "xHxb-dlhMIzW" }, "source": [ "## Overview\n", "\n", "This tutorial focuses on streaming data from a [Kafka](https://kafka.apache.org/quickstart) cluster into a `tf.data.Dataset` which is then used in conjunction with `tf.keras` for training and inference.\n", "\n", "Kafka is primarily a distributed event-streaming platform which provides scalable and fault-tolerant streaming data across data pipelines. It is an essential technical component of a plethora of major enterprises where mission-critical data delivery is a primary requirement.\n", "\n", "**NOTE:** A basic understanding of the [kafka components](https://kafka.apache.org/documentation/#intro_concepts_and_terms) will help you in following the tutorial with ease.\n", "\n", "**NOTE:** A Java runtime environment is required to run this tutorial." ] }, { "cell_type": "markdown", "metadata": { "id": "MUXex9ctTuDB" }, "source": [ "## Setup" ] }, { "cell_type": "markdown", "metadata": { "id": "upgCc3gXybsA" }, "source": [ "### Install the required tensorflow-io and kafka packages" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "48B9eAMMhAgw" }, "outputs": [], "source": [ "!pip install tensorflow-io\n", "!pip install kafka-python" ] }, { "cell_type": "markdown", "metadata": { "id": "gjrZNJQRJP-U" }, "source": [ "### Import packages" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "m6KXZuTBWgRm" }, "outputs": [], "source": [ "import os\n", "from datetime import datetime\n", "import time\n", "import threading\n", "import json\n", "from kafka import KafkaProducer\n", "from kafka.errors import KafkaError\n", "from sklearn.model_selection import train_test_split\n", "import pandas as pd\n", "import tensorflow as tf\n", "import tensorflow_io as tfio" ] }, { "cell_type": "markdown", "metadata": { "id": "eCgO11GTJaTj" }, "source": [ "### Validate tf and tfio imports" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "dX74RKfZ_TdF" }, "outputs": [], "source": [ "print(\"tensorflow-io version: {}\".format(tfio.__version__))\n", "print(\"tensorflow version: {}\".format(tf.__version__))" ] }, { "cell_type": "markdown", "metadata": { "id": "yZmI7l_GykcW" }, "source": [ "## Download and setup Kafka and Zookeeper instances\n", "\n", "For demo purposes, the following instances are setup locally:\n", "\n", "- Kafka (Brokers: 127.0.0.1:9092)\n", "- Zookeeper (Node: 127.0.0.1:2181)\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "YUj0878jPyz7" }, "outputs": [], "source": [ "!curl -sSOL https://dlcdn.apache.org/kafka/3.1.0/kafka_2.13-3.1.0.tgz\n", "!tar -xzf kafka_2.13-3.1.0.tgz" ] }, { "cell_type": "markdown", "metadata": { "id": "vAzfu_WiEs4F" }, "source": [ "Using the default configurations (provided by Apache Kafka) for spinning up the instances." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "n9ujlunrWgRx" }, "outputs": [], "source": [ "!./kafka_2.13-3.1.0/bin/zookeeper-server-start.sh -daemon ./kafka_2.13-3.1.0/config/zookeeper.properties\n", "!./kafka_2.13-3.1.0/bin/kafka-server-start.sh -daemon ./kafka_2.13-3.1.0/config/server.properties\n", "!echo \"Waiting for 10 secs until kafka and zookeeper services are up and running\"\n", "!sleep 10" ] }, { "cell_type": "markdown", "metadata": { "id": "f6qxCdypE1DD" }, "source": [ "Once the instances are started as daemon processes, grep for `kafka` in the processes list. The two java processes correspond to zookeeper and the kafka instances." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "48LqMJ1BEHm5" }, "outputs": [], "source": [ "!ps -ef | grep kafka" ] }, { "cell_type": "markdown", "metadata": { "id": "Z3TntBqanQnh" }, "source": [ "Create the kafka topics with the following specs:\n", "\n", "- susy-train: partitions=1, replication-factor=1 \n", "- susy-test: partitions=2, replication-factor=1 " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "lXJWqMmWnPyP" }, "outputs": [], "source": [ "!./kafka_2.13-3.1.0/bin/kafka-topics.sh --create --bootstrap-server 127.0.0.1:9092 --replication-factor 1 --partitions 1 --topic susy-train\n", "!./kafka_2.13-3.1.0/bin/kafka-topics.sh --create --bootstrap-server 127.0.0.1:9092 --replication-factor 1 --partitions 2 --topic susy-test\n" ] }, { "cell_type": "markdown", "metadata": { "id": "kNxf_NqjnycC" }, "source": [ "Describe the topic for details on the configuration" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "apCf9pfVnwn7" }, "outputs": [], "source": [ "!./kafka_2.13-3.1.0/bin/kafka-topics.sh --describe --bootstrap-server 127.0.0.1:9092 --topic susy-train\n", "!./kafka_2.13-3.1.0/bin/kafka-topics.sh --describe --bootstrap-server 127.0.0.1:9092 --topic susy-test\n" ] }, { "cell_type": "markdown", "metadata": { "id": "jKVnz3Pjot9t" }, "source": [ "The replication factor 1 indicates that the data is not being replicated. This is due to the presence of a single broker in our kafka setup.\n", "In production systems, the number of bootstrap servers can be in the range of 100's of nodes. That is where the fault-tolerance using replication comes into picture.\n", "\n", "Please refer to the [docs](https://kafka.apache.org/documentation/#replication) for more details.\n" ] }, { "cell_type": "markdown", "metadata": { "id": "bjCy3zaCQJ7-" }, "source": [ "## SUSY Dataset\n", "\n", "Kafka being an event streaming platform, enables data from various sources to be written into it. For instance:\n", "\n", "- Web traffic logs\n", "- Astronomical measurements\n", "- IoT sensor data\n", "- Product reviews and many more.\n", "\n", "For the purpose of this tutorial, lets download the [SUSY](https://archive.ics.uci.edu/ml/datasets/SUSY#) dataset and feed the data into kafka manually. 
, { "cell_type": "markdown", "metadata": { "id": "bjCy3zaCQJ7-" }, "source": [ "## SUSY Dataset\n", "\n", "Kafka, being an event-streaming platform, enables data from various sources to be written into it. For instance:\n", "\n", "- Web traffic logs\n", "- Astronomical measurements\n", "- IoT sensor data\n", "- Product reviews and many more.\n", "\n", "For the purpose of this tutorial, let's download the [SUSY](https://archive.ics.uci.edu/ml/datasets/SUSY#) dataset and feed the data into kafka manually. The goal of this classification problem is to distinguish between a signal process which produces supersymmetric particles and a background process which does not.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "emslB2EGQMCR" }, "outputs": [], "source": [ "!curl -sSOL https://archive.ics.uci.edu/ml/machine-learning-databases/00279/SUSY.csv.gz" ] }, { "cell_type": "markdown", "metadata": { "id": "4CfKVmCvwcL7" }, "source": [ "### Explore the dataset" ] }, { "cell_type": "markdown", "metadata": { "id": "18aR_MsOKToc" }, "source": [ "The first column is the class label (1 for signal, 0 for background), followed by the 18 features (8 low-level features then 10 high-level features).\n", "The first 8 features are kinematic properties measured by the particle detectors in the accelerator. The last 10 features are functions of the first 8 features. These are high-level features derived by physicists to help discriminate between the two classes." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "XkXyocIdKRSB" }, "outputs": [], "source": [ "COLUMNS = [\n", "  # labels\n", "  'class',\n", "  # low-level features\n", "  'lepton_1_pT',\n", "  'lepton_1_eta',\n", "  'lepton_1_phi',\n", "  'lepton_2_pT',\n", "  'lepton_2_eta',\n", "  'lepton_2_phi',\n", "  'missing_energy_magnitude',\n", "  'missing_energy_phi',\n", "  # high-level derived features\n", "  'MET_rel',\n", "  'axial_MET',\n", "  'M_R',\n", "  'M_TR_2',\n", "  'R',\n", "  'MT2',\n", "  'S_R',\n", "  'M_Delta_R',\n", "  'dPhi_r_b',\n", "  'cos(theta_r1)'\n", "  ]" ] }, { "cell_type": "markdown", "metadata": { "id": "q0NBA51_1Ie2" }, "source": [ "The entire dataset consists of 5 million rows. However, for the purpose of this tutorial, let's consider only a fraction of the dataset (100,000 rows) so that less time is spent on moving the data and more time on understanding the functionality of the API." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "nC-yt_c9u0sH" }, "outputs": [], "source": [ "susy_iterator = pd.read_csv('SUSY.csv.gz', header=None, names=COLUMNS, chunksize=100000)\n", "susy_df = next(susy_iterator)\n", "susy_df.head()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "AlNuW7xbu6o8" }, "outputs": [], "source": [ "# Number of datapoints and columns\n", "len(susy_df), len(susy_df.columns)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "c6Cg22bU0-na" }, "outputs": [], "source": [ "# Number of datapoints belonging to each class (0: background noise, 1: signal)\n", "len(susy_df[susy_df[\"class\"]==0]), len(susy_df[susy_df[\"class\"]==1])" ] }, { "cell_type": "markdown", "metadata": { "id": "tF5K9xtmlT2P" }, "source": [ "### Split the dataset\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "n-ku_X0Wld59" }, "outputs": [], "source": [ "train_df, test_df = train_test_split(susy_df, test_size=0.4, shuffle=True)\n", "print(\"Number of training samples: \",len(train_df))\n", "print(\"Number of testing samples: \",len(test_df))\n", "\n", "x_train_df = train_df.drop([\"class\"], axis=1)\n", "y_train_df = train_df[\"class\"]\n", "\n", "x_test_df = test_df.drop([\"class\"], axis=1)\n", "y_test_df = test_df[\"class\"]\n", "\n", "# The labels are set as the kafka message keys so as to store data\n", "# in multiple partitions. This, in turn, enables efficient data retrieval\n", "# using the consumer groups.\n", "x_train = list(filter(None, x_train_df.to_csv(index=False).split(\"\\n\")[1:]))\n", "y_train = list(filter(None, y_train_df.to_csv(index=False).split(\"\\n\")[1:]))\n", "\n", "x_test = list(filter(None, x_test_df.to_csv(index=False).split(\"\\n\")[1:]))\n", "y_test = list(filter(None, y_test_df.to_csv(index=False).split(\"\\n\")[1:]))\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "YHXk0x2MXVgL" }, "outputs": [], "source": [ "NUM_COLUMNS = len(x_train_df.columns)\n", "len(x_train), len(y_train), len(x_test), len(y_test)" ] }
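, { "cell_type": "markdown", "metadata": {}, "source": [ "Each element of `x_train` is a single CSV-encoded string holding the 18 feature values of one sample, and the corresponding element of `y_train` is the label that will be used as the kafka message key. As an optional sanity check, the first serialized sample and its key can be printed:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Optional check: one CSV-encoded feature row and the label used as its message key\n", "print(x_train[0])\n", "print(y_train[0])" ] }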
, { "cell_type": "markdown", "metadata": { "id": "wwP5U4GqmhoL" }, "source": [ "### Store the train and test data in kafka\n", "\n", "Storing the data in kafka simulates an environment for continuous remote data retrieval for training and inference purposes." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "YhwFImSqncLE" }, "outputs": [], "source": [ "def error_callback(exc):\n", "  raise Exception('Error while sending data to kafka: {0}'.format(str(exc)))\n", "\n", "def write_to_kafka(topic_name, items):\n", "  count=0\n", "  producer = KafkaProducer(bootstrap_servers=['127.0.0.1:9092'])\n", "  for message, key in items:\n", "    producer.send(topic_name, key=key.encode('utf-8'), value=message.encode('utf-8')).add_errback(error_callback)\n", "    count+=1\n", "  producer.flush()\n", "  print(\"Wrote {0} messages into topic: {1}\".format(count, topic_name))\n", "\n", "write_to_kafka(\"susy-train\", zip(x_train, y_train))\n", "write_to_kafka(\"susy-test\", zip(x_test, y_test))\n" ] }, { "cell_type": "markdown", "metadata": { "id": "58q52py93jEf" }, "source": [ "### Define the tfio train dataset\n", "\n", "The `IODataset` class is utilized for streaming data from kafka into tensorflow. The class inherits from `tf.data.Dataset` and thus has all the useful functionalities of `tf.data.Dataset` out of the box.\n" ] }
, { "cell_type": "code", "execution_count": null, "metadata": { "id": "HHOcitbW2_d1" }, "outputs": [], "source": [ "def decode_kafka_item(item):\n", "  message = tf.io.decode_csv(item.message, [[0.0] for i in range(NUM_COLUMNS)])\n", "  key = tf.strings.to_number(item.key)\n", "  return (message, key)\n", "\n", "BATCH_SIZE=64\n", "SHUFFLE_BUFFER_SIZE=64\n", "train_ds = tfio.IODataset.from_kafka('susy-train', partition=0, offset=0)\n", "train_ds = train_ds.shuffle(buffer_size=SHUFFLE_BUFFER_SIZE)\n", "train_ds = train_ds.map(decode_kafka_item)\n", "train_ds = train_ds.batch(BATCH_SIZE)" ] }, { "cell_type": "markdown", "metadata": { "id": "x84lZJY164RI" }, "source": [ "## Build and train the model\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "uuHtpAMqLqmv" }, "outputs": [], "source": [ "# Set the parameters\n", "\n", "OPTIMIZER=\"adam\"\n", "LOSS=tf.keras.losses.BinaryCrossentropy(from_logits=False)\n", "METRICS=['accuracy']\n", "EPOCHS=10\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "7lBmxxuj63jZ" }, "outputs": [], "source": [ "# design/build the model\n", "model = tf.keras.Sequential([\n", "  tf.keras.layers.Input(shape=(NUM_COLUMNS,)),\n", "  tf.keras.layers.Dense(128, activation='relu'),\n", "  tf.keras.layers.Dropout(0.2),\n", "  tf.keras.layers.Dense(256, activation='relu'),\n", "  tf.keras.layers.Dropout(0.4),\n", "  tf.keras.layers.Dense(128, activation='relu'),\n", "  tf.keras.layers.Dropout(0.4),\n", "  tf.keras.layers.Dense(1, activation='sigmoid')\n", "])\n", "\n", "print(model.summary())" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "LTDFVxpSLfXI" }, "outputs": [], "source": [ "# compile the model\n", "model.compile(optimizer=OPTIMIZER, loss=LOSS, metrics=METRICS)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "SIJMg-saLgeR" }, "outputs": [], "source": [ "# fit the model\n", "model.fit(train_ds, epochs=EPOCHS)" ] }, { "cell_type": "markdown", "metadata": { "id": "ZPy0Ka21QII5" }, "source": [ "Note: Please do not confuse this training step with online training. That is an entirely different paradigm which will be covered in a later section.\n", "\n", "Since only a fraction of the dataset is being utilized, our accuracy is limited to ~78% during the training phase. However, please feel free to store additional data in kafka for better model performance. Also, since the goal was to just demonstrate the functionality of the tfio kafka datasets, a smaller and less complicated neural network was used. However, one can increase the complexity of the model, modify the learning strategy, tune hyper-parameters, etc., for exploration purposes. For a baseline approach, please refer to this [article](https://www.nature.com/articles/ncomms5308#Sec11)." ] }, { "cell_type": "markdown", "metadata": { "id": "XYJW8za2qm4c" }, "source": [ "## Infer on the test data\n", "\n", "To infer on the test data while adhering to 'exactly-once' semantics along with fault-tolerance, the `streaming.KafkaGroupIODataset` can be utilized.\n" ] }, { "cell_type": "markdown", "metadata": { "id": "w3FZOlSh2pmy" }, "source": [ "### Define the tfio test dataset\n", "\n", "The `stream_timeout` parameter blocks for the given duration, waiting for new data points to be streamed into the topic. This removes the need to create new datasets if the data is being streamed into the topic in an intermittent fashion." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "wjnM81lPROen" }, "outputs": [], "source": [ "test_ds = tfio.experimental.streaming.KafkaGroupIODataset(\n", "    topics=[\"susy-test\"],\n", "    group_id=\"testcg\",\n", "    servers=\"127.0.0.1:9092\",\n", "    stream_timeout=10000,\n", "    configuration=[\n", "        \"session.timeout.ms=7000\",\n", "        \"max.poll.interval.ms=8000\",\n", "        \"auto.offset.reset=earliest\"\n", "    ],\n", ")\n", "\n", "def decode_kafka_test_item(raw_message, raw_key):\n", "  message = tf.io.decode_csv(raw_message, [[0.0] for i in range(NUM_COLUMNS)])\n", "  key = tf.strings.to_number(raw_key)\n", "  return (message, key)\n", "\n", "test_ds = test_ds.map(decode_kafka_test_item)\n", "test_ds = test_ds.batch(BATCH_SIZE)" ] }, { "cell_type": "markdown", "metadata": { "id": "cg8j3bZsSF6u" }, "source": [ "Though this class can be used for training purposes, there are caveats which need to be addressed. Once all the messages are read from kafka and the latest offsets are committed using the `streaming.KafkaGroupIODataset`, the consumer doesn't restart reading the messages from the beginning. Thus, while training, it is only possible to train for a single epoch, with the data continuously flowing in. This kind of functionality has limited use cases during the training phase, wherein, once a datapoint has been consumed by the model, it is no longer required and can be discarded.\n", "\n", "However, this functionality shines when it comes to robust inference with exactly-once semantics." ] }, { "cell_type": "markdown", "metadata": { "id": "2PapN5Q_241k" }, "source": [ "### Evaluate the performance on the test data\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "6hMtIe1X215P" }, "outputs": [], "source": [ "res = model.evaluate(test_ds)\n", "print(\"test loss, test acc:\", res)\n" ] }, { "cell_type": "markdown", "metadata": { "id": "mWX9j11bWJGe" }, "source": [ "Since the inference is based on 'exactly-once' semantics, the evaluation on the test set can be run only once. In order to run the inference again on the test data, a new consumer group should be used, as sketched below." ] }
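, { "cell_type": "markdown", "metadata": {}, "source": [ "For instance, a minimal sketch for re-running the evaluation, assuming a fresh (hypothetical) consumer group named `testcg-2`. Because of `auto.offset.reset=earliest`, the new group starts reading the topic from the beginning:\n", "\n", "```python\n", "# Sketch only: 'testcg-2' is an arbitrary new consumer group id.\n", "test_ds_rerun = tfio.experimental.streaming.KafkaGroupIODataset(\n", "    topics=[\"susy-test\"],\n", "    group_id=\"testcg-2\",\n", "    servers=\"127.0.0.1:9092\",\n", "    stream_timeout=10000,\n", "    configuration=[\n", "        \"session.timeout.ms=7000\",\n", "        \"max.poll.interval.ms=8000\",\n", "        \"auto.offset.reset=earliest\"\n", "    ],\n", ")\n", "test_ds_rerun = test_ds_rerun.map(decode_kafka_test_item).batch(BATCH_SIZE)\n", "model.evaluate(test_ds_rerun)\n", "```" ] }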
, { "cell_type": "markdown", "metadata": { "id": "95Chcbd9xThl" }, "source": [ "### Track the offset lag of the `testcg` consumer group" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "9uz3km0RxUG7" }, "outputs": [], "source": [ "!./kafka_2.13-3.1.0/bin/kafka-consumer-groups.sh --bootstrap-server 127.0.0.1:9092 --describe --group testcg\n" ] }, { "cell_type": "markdown", "metadata": { "id": "I8Wg0_eXMKL9" }, "source": [ "Once the `current-offset` matches the `log-end-offset` for all the partitions, it indicates that the consumer(s) have completed fetching all the messages from the kafka topic." ] }, { "cell_type": "markdown", "metadata": { "id": "TYwillcxP97z" }, "source": [ "## Online learning\n", "\n", "The online machine learning paradigm is a bit different from the traditional/conventional way of training machine learning models. In the former case, the model continues to incrementally learn/update its parameters as soon as new data points are available, and this process is expected to continue indefinitely. This is unlike the latter approach, where the dataset is fixed and the model iterates over it `n` number of times. In online learning, the data, once consumed by the model, may not be available for training again.\n", "\n", "By utilizing the `streaming.KafkaBatchIODataset`, it is now possible to train the models in this fashion. Let's continue to use our SUSY dataset for demonstrating this functionality." ] }, { "cell_type": "markdown", "metadata": { "id": "r5HyQtUZXi_P" }, "source": [ "### The tfio training dataset for online learning\n", "\n", "The `streaming.KafkaBatchIODataset` is similar to the `streaming.KafkaGroupIODataset` in its API. Additionally, it is recommended to utilize the `stream_timeout` parameter to configure the duration for which the dataset will block for new messages before timing out. In the instance below, the dataset is configured with a `stream_timeout` of `10000` milliseconds. This implies that, after all the messages from the topic have been consumed, the dataset will wait for an additional 10 seconds before timing out and disconnecting from the kafka cluster. If new messages are streamed into the topic before timing out, the data consumption and model training resume for those newly consumed data points. To block indefinitely, set it to `-1`." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "m-zCHNOuSJDL" }, "outputs": [], "source": [ "online_train_ds = tfio.experimental.streaming.KafkaBatchIODataset(\n", "    topics=[\"susy-train\"],\n", "    group_id=\"cgonline\",\n", "    servers=\"127.0.0.1:9092\",\n", "    stream_timeout=10000, # in milliseconds, to block indefinitely, set it to -1.\n", "    configuration=[\n", "        \"session.timeout.ms=7000\",\n", "        \"max.poll.interval.ms=8000\",\n", "        \"auto.offset.reset=earliest\"\n", "    ],\n", ")" ] }, { "cell_type": "markdown", "metadata": { "id": "lgSCn5dskO0t" }, "source": [ "Every item that the `online_train_ds` generates is a `tf.data.Dataset` in itself. Thus, all the standard transformations can be applied as usual.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "9cxF0bgGkQJs" }, "outputs": [], "source": [ "def decode_kafka_online_item(raw_message, raw_key):\n", "  message = tf.io.decode_csv(raw_message, [[0.0] for i in range(NUM_COLUMNS)])\n", "  key = tf.strings.to_number(raw_key)\n", "  return (message, key)\n", "\n", "for mini_ds in online_train_ds:\n", "  mini_ds = mini_ds.shuffle(buffer_size=32)\n", "  mini_ds = mini_ds.map(decode_kafka_online_item)\n", "  mini_ds = mini_ds.batch(32)\n", "  if len(mini_ds) > 0:\n", "    model.fit(mini_ds, epochs=3)" ] }, { "cell_type": "markdown", "metadata": { "id": "IGph8eP9isuW" }, "source": [ "The incrementally trained model can be saved in a periodic fashion (based on use cases) and can be utilized to infer on the test data in either online or offline modes." ] }
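, { "cell_type": "markdown", "metadata": {}, "source": [ "For instance, a minimal sketch of periodic checkpointing inside the online-training loop; the save path and the choice to checkpoint after every mini-batch dataset are arbitrary, not part of the original flow:\n", "\n", "```python\n", "# Sketch only: checkpoint the model after each incremental update.\n", "checkpoint_dir = \"/tmp/susy_online_model\"  # hypothetical location\n", "\n", "for step, mini_ds in enumerate(online_train_ds):\n", "  mini_ds = mini_ds.shuffle(buffer_size=32)\n", "  mini_ds = mini_ds.map(decode_kafka_online_item)\n", "  mini_ds = mini_ds.batch(32)\n", "  if len(mini_ds) > 0:\n", "    model.fit(mini_ds, epochs=3)\n", "    model.save(\"{}/step_{}\".format(checkpoint_dir, step))\n", "```" ] }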
, { "cell_type": "markdown", "metadata": {}, "source": [ "Note: The `streaming.KafkaBatchIODataset` and `streaming.KafkaGroupIODataset` are still in the experimental phase and have scope for improvements based on user feedback." ] }, { "cell_type": "markdown", "metadata": { "id": "P8QAS_3k1y3u" }, "source": [ "## References:\n", "\n", "- Baldi, P., P. Sadowski, and D. Whiteson. “Searching for Exotic Particles in High-energy Physics with Deep Learning.” Nature Communications 5 (July 2, 2014).\n", "\n", "- SUSY Dataset: https://archive.ics.uci.edu/ml/datasets/SUSY#\n" ] } ], "metadata": { "accelerator": "GPU", "colab": { "collapsed_sections": [], "name": "kafka.ipynb", "toc_visible": true }, "kernelspec": { "display_name": "Python 3", "name": "python3" } }, "nbformat": 4, "nbformat_minor": 0 }