Learn how to scale a Kafka topic to support analytics for thousands of concurrent users
Streaming data has become the backbone of the modern data enterprise, and those who can wield it for value creation will have more opportunities to lead and drive the data industry forward.
Data engineers and developers familiar with Apache Kafka can benefit from training that shows them how to leverage that data for further value creation within the business.
In this 3-hour in-person training, you'll gain invaluable skills that extend your Apache Kafka repertoire, giving you the ability to leverage streaming data to build scalable user-facing analytics.
Learn how to ingest Apache Kafka topics into a real-time data platform and enrich them with dimensional data from data warehouses like Snowflake, object storage like Amazon S3, and other batch data sources.
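As a toy sketch of that enrichment step (the tables, columns, and values here are invented for illustration, with SQLite standing in for the real-time platform), streamed events can be joined against a dimension table loaded from a batch source:

```python
import sqlite3

# In-memory database standing in for the real-time platform (hypothetical schema).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE events (user_id TEXT, action TEXT, ts TEXT);      -- streamed from Kafka
    CREATE TABLE dim_users (user_id TEXT, plan TEXT, region TEXT); -- batch-loaded from a warehouse
""")
conn.executemany("INSERT INTO events VALUES (?, ?, ?)", [
    ("u1", "click", "2024-01-01T00:00:00"),
    ("u2", "view",  "2024-01-01T00:00:01"),
])
conn.executemany("INSERT INTO dim_users VALUES (?, ?, ?)", [
    ("u1", "pro",  "us-east"),
    ("u2", "free", "eu-west"),
])

# Enrich each streamed event with dimensional attributes from the batch table.
rows = conn.execute("""
    SELECT e.user_id, e.action, d.plan, d.region
    FROM events e JOIN dim_users d USING (user_id)
    ORDER BY e.ts
""").fetchall()
print(rows)
```

In production the join runs inside the platform at query or ingest time rather than in application code, but the shape of the enrichment is the same.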
Learn how to craft scalable and secure user-facing analytics APIs. Use a powerful templating language to integrate query parameters into your APIs, and implement row-level security policies for multi-tenant architectures.
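The idea behind parameterized, tenant-scoped endpoints can be sketched in a few lines of Python (the schema and function name are hypothetical; bound SQL parameters stand in for the API's query parameters, and the mandatory tenant predicate plays the role of a row-level security policy):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (tenant_id TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)", [
    ("acme", 10.0), ("acme", 5.0), ("globex", 99.0),
])

def orders_for_tenant(tenant_id, min_amount=0.0):
    # Bound parameters take the place of the API's query parameters; the
    # non-optional tenant_id predicate is the row-level security policy,
    # so one tenant can never read another tenant's rows.
    sql = """
        SELECT amount FROM orders
        WHERE tenant_id = ? AND amount >= ?
        ORDER BY amount DESC
    """
    return [r[0] for r in conn.execute(sql, (tenant_id, min_amount))]

print(orders_for_tenant("acme"))  # a tenant sees only its own rows
```

Binding parameters instead of interpolating strings also closes off SQL injection, which matters once these endpoints are exposed to user-facing applications.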
Learn how to use Tinybird to build real-time data pipelines using nothing but SQL, and pick up techniques to optimize that SQL for real-time processing, such as deduplication, rollups, and materialized views.
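Two of those patterns can be sketched in plain SQL (run here against SQLite purely for illustration; the event schema is invented): deduplication keeps only the latest version of each event, and a rollup pre-aggregates the counts that a materialized view would maintain incrementally at ingest time:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_events (event_id TEXT, status TEXT, version INTEGER)")
conn.executemany("INSERT INTO raw_events VALUES (?, ?, ?)", [
    ("e1", "pending", 1),
    ("e1", "done",    2),   # later version of the same event
    ("e2", "pending", 1),
])

# Deduplication: keep only the latest version of each event_id.
dedup = conn.execute("""
    SELECT event_id, status
    FROM (SELECT event_id, status,
                 ROW_NUMBER() OVER (PARTITION BY event_id ORDER BY version DESC) AS rn
          FROM raw_events) AS latest
    WHERE rn = 1
    ORDER BY event_id
""").fetchall()

# Rollup: pre-aggregate counts per status over the deduplicated stream --
# the kind of result a materialized view keeps up to date as rows arrive.
rollup = conn.execute("""
    SELECT status, COUNT(*)
    FROM (SELECT event_id, status,
                 ROW_NUMBER() OVER (PARTITION BY event_id ORDER BY version DESC) AS rn
          FROM raw_events) AS latest
    WHERE rn = 1
    GROUP BY status
    ORDER BY status
""").fetchall()
print(dedup, rollup)
```

The payoff of these patterns is that user-facing queries scan a small pre-aggregated table instead of the full raw event stream.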
Learn how to scale, maintain, and iterate your data products to support real-time application development. Integrate your data pipelines with Git to facilitate easier changes and faster iterations.
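One common shape for that Git integration is a CI job that deploys pipeline definitions on every merge. The fragment below assumes a GitHub Actions workflow and the Tinybird CLI; the package name and exact commands are assumptions for illustration, not taken from the training materials:

```yaml
# Hypothetical CI step: deploy version-controlled pipes and data sources on merge.
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: pip install tinybird-cli   # CLI package name is an assumption
      - run: tb push --force            # push the repo's .datasource/.pipe files
```

Keeping pipes and data sources as files in a repository means every change is reviewed, versioned, and reproducible, which is what makes fast iteration safe.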