Confluent

Publish low-latency APIs on Confluent Streams in minutes with Tinybird

Instead of building a new consumer every time you want to make sense of your Confluent streams, write SQL queries and expose them as API endpoints. Easy to maintain. Always up-to-date. Fast as can be.

No credit card needed

Trusted in production by engineers at...

The Hotels Network
Feedback Loop
Stay
Plytix
Audiense
Situm
Genially

Easy integration

Connect to your Confluent cluster in seconds. Choose your topics, define your schema, and ingest millions of events per second into a fully-managed OLAP.
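Under the hood, each connected topic is backed by a Data Source with a defined schema. A minimal sketch of what such a definition could look like in a .datasource file (column names and JSONPaths are hypothetical; topic and consumer group names follow the CLI example below):

```
SCHEMA >
    `timestamp` DateTime `json:$.timestamp`,
    `product_id` String `json:$.product_id`,
    `amount` Float32 `json:$.amount`

ENGINE "MergeTree"
ENGINE_SORTING_KEY "timestamp"

KAFKA_CONNECTION_NAME "my_confluent_connection"
KAFKA_TOPIC "sales_prod"
KAFKA_GROUP_ID "tb-prod"
```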

It's just SQL

Using nothing but SQL, query your Confluent data and enrich it with dimensions from your database, warehouse, or files.
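A sketch of such an enrichment, assuming a `sales_prod` Data Source fed by the Confluent topic and a hypothetical `products` dimension Data Source:

```
NODE enriched_sales
SQL >
  SELECT
    s.timestamp,
    s.amount,
    p.category
  FROM sales_prod AS s
  JOIN products AS p ON s.product_id = p.product_id
```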

Publish APIs in a click

Instantly publish your SQL queries as parameterized data APIs, automatically documented and scaled.

Secure

Use Auth tokens to control access to API endpoints. Implement access policies as you need. Support for row-level security.

Turn Data Streams into answers in minutes with SQL.

Every new use case over your Confluent Data Streams is just one SQL query away. Store the raw data or materialize roll-ups in real time at any scale. Enrich with SQL JOINs. We worry about performance so you can focus on enabling your teams.
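A roll-up like the one described above can be sketched as a materialized Pipe node that aggregates the stream into a target Data Source as events arrive (the `sales_prod` source and target name are hypothetical):

```
NODE daily_sales_rollup
SQL >
  SELECT
    toDate(timestamp) AS day,
    count() AS events,
    sum(amount) AS total_amount
  FROM sales_prod
  GROUP BY day

TYPE materialized
DATASOURCE daily_sales_mv
```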

$ tb connection create confluent --bootstrap-servers pkc-a1234.europe-west2.gcp.confluent.cloud:9092 --key CK2AS3 --secret "19EfGz34t"
Connection name (optional, current: pkc-a1234.europe-west2.gcp.confluent.cloud:9092) [pkc-a1234.europe-west2.gcp.confluent.cloud:9092]:
** Connection 34250dcb-4e51-4d9b-9481-8db673c6a590 created successfully!

$ tb datasource connect 34250dcb-4e51-4d9b-9481-8db673c6a590 sales
We've discovered the following topics:
   sales_prod
   sales_staging
Confluent topic:
sales_prod
Confluent group: tb-prod
Proceed? [y/N]:
y
** Data Source 't_07047b1547c64d5a882a97c2885f761e' created
** Confluent streaming connection configured successfully!

NODE avg_triptime_endpoint
SQL >
  SELECT
    toDayOfMonth(pickup_datetime) AS day,
    avg(dateDiff('minute', pickup_datetime, dropoff_datetime)) AS avg_trip_time_minutes
  FROM tripdata
  {% if defined(start_date) and defined(end_date) %}
    WHERE pickup_datetime BETWEEN {{Date(start_date)}} AND {{Date(end_date)}}
  {% end %}
  GROUP BY day

$ tb push endpoints/avg_triptime.pipe 
** Processing avg_triptime.pipe 
** Building dependencies 
** Creating avg_triptime 
** Token read API token not found, creating one 
=> Test endpoint with: 
$ curl "https://api.tinybird.co/v0/pipes/avg_triptime.json?token=<TOKEN>&start_date=2021-01-01&end_date=2021-03-01"
** 'avg_triptime' created

1

One topic, one data source

Tinybird consumes your topics in real time into Data Sources that can be queried individually via SQL.

2

Enrich and Transform your Data Streams

As data comes in, you can enrich it with additional business-relevant data via our Data Pipes and prepare it for consumption.

3

Publish API endpoints

Securely share access to your data with just one click and get full OpenAPI and Postman documentation for your APIs.

Enrich your Confluent Streams with data from other Sources

Tinybird lets you enrich Confluent streams with data from databases, data warehouses, and more.

Amazon Redshift

Amazon S3

Google BigQuery

Apache Kafka

PostgreSQL

MySQL

Snowflake