Apache Kafka

Turn your Kafka topics into actionable API Endpoints your teams can consume

Instead of building a new consumer every time you want to make sense of your Data Streams, write SQL queries and expose them as API endpoints. Easy to maintain. Always up-to-date. Fast as can be.


Trusted in production by engineers at...

The Hotels Network
Feedback Loop
Stay
Plytix
Audiense
Situm
Genially

Easy integration

Connect to Kafka and start building APIs right away. Choose a topic, select the fields you are interested in, and ingest millions of rows per second.
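A minimal sketch of what the resulting Data Source definition can look like (the connection name, topic, and field schema below are illustrative assumptions, not output from the product):

SCHEMA >
    `timestamp` DateTime `json:$.timestamp`,
    `product_id` String `json:$.product_id`,
    `amount` Float64 `json:$.amount`

KAFKA_CONNECTION_NAME my_kafka_connection
KAFKA_TOPIC sales_prod
KAFKA_GROUP_ID tb-prod
KAFKA_AUTO_OFFSET_RESET latest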


SQL based

Transform or enrich your Kafka topics with JOINs using our serverless Data Pipes.
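For example, a Pipe node can enrich a raw topic with a dimension table in a single JOIN (sales and products are hypothetical Data Sources used for illustration):

NODE enriched_sales
SQL >
  SELECT s.timestamp, s.amount, p.category
  FROM sales AS s
  JOIN products AS p ON s.product_id = p.product_id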


Automatic APIs

All your Data APIs in one place, automatically documented and scaled. Consistent results for your Data/Dev Teams.


Secure

Use Auth tokens to control access to API endpoints. Implement access policies as needed, with support for row-level security.
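For instance, a consumer can only read a published endpoint with a token that carries the right scope (the pipe name and token below are placeholders):

$ curl "https://api.tinybird.co/v0/pipes/enriched_sales.json?token=<READ_TOKEN>"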


Turn Data Streams into answers in minutes with SQL.

Every new use case over your Kafka Data Streams is just one SQL query away. Store the raw data or materialize roll-ups in realtime at any scale. Enrich with SQL JOINs. We will worry about performance so you can focus on enabling your teams.

$ tb connection create kafka --bootstrap-servers pkc-a1234.europe-west2.gcp.confluent.cloud:9092 --key CK2AS3 --secret "19EfGz34t"
Connection name (optional, current: pkc-a1234.europe-west2.gcp.confluent.cloud:9092) [pkc-a1234.europe-west2.gcp.confluent.cloud:9092]:
** Connection 34250dcb-4e51-4d9b-9481-8db673c6a590 created successfully!

$ tb datasource connect 34250dcb-4e51-4d9b-9481-8db673c6a590 sales
We've discovered the following topics:
   sales_prod
   sales_staging
Kafka topic:
sales_prod
Kafka group: tb-prod
Kafka doesn't seem to have prior commits on this topic and group ID
Setting auto.offset.reset is required. Valid values:
 latest          Skip earlier messages and ingest only new messages
 earliest        Start ingestion from the first message
Kafka auto.offset.reset config: latest
Proceed? [y/N]:
y
** Data Source 't_07047b1547c64d5a882a97c2885f761e' created
** Kafka streaming connection configured successfully!

NODE avg_triptime_endpoint
SQL >
  SELECT
    toDayOfMonth(pickup_datetime) as day,
    avg(dateDiff('minute', pickup_datetime, dropoff_datetime)) as avg_trip_time_minutes
  FROM tripdata
  {% if defined(start_date) and defined(end_date) %}
    WHERE pickup_datetime BETWEEN {{Date(start_date)}} AND {{Date(end_date)}}
  {% end %}
  GROUP BY day

$ tb push endpoints/avg_triptime.pipe 
** Processing avg_triptime.pipe 
** Building dependencies 
** Creating avg_triptime 
** Token read API token not found, creating one 
=> Test endpoint with: 
$ curl "https://api.tinybird.co/v0/pipes/avg_triptime.json?token=<TOKEN>&start_date=2021-01-01&end_date=2021-03-01"
** 'avg_triptime' created
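And to materialize a roll-up in realtime, as mentioned above, a Pipe node can write its results into a target Data Source as data arrives. A minimal sketch, reusing the tripdata Data Source (the node and target names are illustrative):

NODE daily_trips
SQL >
  SELECT
    toDate(pickup_datetime) as day,
    count() as trips
  FROM tripdata
  GROUP BY day

TYPE materialized
DATASOURCE daily_trips_mv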

1. One topic, one data source

Tinybird consumes your topics in realtime into Data Sources that can be queried individually via SQL (see the example after these steps).

2. Enrich and Transform your Data Streams

As data comes in, you can enrich it with additional business-relevant data via our Data Pipes and prepare it for consumption.

3. Publish API endpoints

Securely share access to your data with just one click, and get full OpenAPI and Postman documentation for your APIs.
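As an example of step 1, a Data Source can be queried like any SQL table (sales and its fields are hypothetical):

SELECT product_id, sum(amount) as total_sales
FROM sales
GROUP BY product_id
ORDER BY total_sales DESC
LIMIT 10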

We accelerate your data, no matter where it is.

Connect data from Relational Databases, Data Warehouses and Data Streams.

Amazon Redshift

Amazon S3

Google BigQuery

Apache Kafka

PostgreSQL

MySQL

Snowflake