Kafka Meetup

15 February 2018

Venue:

Holländargatan 13, Stockholm

17:30

Registration deadline:

Registration closed

How can Kafka's technology be used to prevent money laundering and fraud? And how can you build streaming data pipelines with Kafka, without writing a single line of code? Sign up for our Kafka meetup on 15 February to hear more!

Sign up here

17:30-18:00 Registration + dinner

18:00-18:05 Intro

18:05-18:30 Rethink Data in the Microservices Chaos - Morvarid Aprin (Forefront)

18:30-19:10 AML and fraud detection in banking, based on event-driven microservices and stateful streams - Andreas Lundsten (Forefront)

19:10-19:20 Break

19:20-20:05 Look Ma, no Code! Building Streaming Data Pipelines with Apache Kafka and KSQL - Robin Moffat (Confluent)

20:05-20:30 Mingle

Rethink Data in the Microservices Chaos

There are well-known benefits to moving to a microservices architecture. However, to overcome the complexities that a request-driven architecture can bring, we need to rethink how we define and treat our data. We will begin this meetup with a short talk about this core redefinition of data as streams, and how Kafka fits into the picture, before moving on to the next presentations.

AML and fraud detection in banking, based on event-driven microservices and stateful streams

How can Kafka’s streams and state tables be leveraged to implement a fast and secure AML (Anti-Money Laundering) and fraud detection platform? Moving away from a monolith to a microservices environment is a prioritized task for businesses today: to reduce risk and improve performance, lightweight modules with distinct responsibilities are preferred. This presentation will focus on event sourcing/CQRS as a data source. We will describe how Kafka's techniques, combined with a microservices architecture, can optimize an AML platform.

Look Ma, no Code! Building Streaming Data Pipelines with Apache Kafka and KSQL

Have you ever thought that you needed to be a programmer to do stream processing and build streaming data pipelines? Think again! Companies new and old are all recognising the importance of a low-latency, scalable, fault-tolerant data backbone, in the form of the Apache Kafka streaming platform. With Kafka, developers can integrate multiple sources and systems, which enables low-latency analytics, event-driven architectures and the population of multiple downstream systems. These data pipelines can be built using configuration alone. In this talk, we'll see how easy it is to stream data from a database such as Oracle into Kafka using the Kafka Connect API. In addition, we'll use KSQL to filter, aggregate and join it to other data, and then stream this from Kafka out into multiple targets such as Elasticsearch and MySQL. All of this can be accomplished without a single line of code! Why should Java geeks have all the fun?
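To give a feel for what this looks like in practice, here is a minimal KSQL sketch of the filter step the talk describes; the stream, topic and column names are hypothetical and purely illustrative:

```sql
-- Register an existing Kafka topic as a KSQL stream
-- (assumes Avro values and a running Schema Registry; names are made up)
CREATE STREAM payments WITH (KAFKA_TOPIC='payments', VALUE_FORMAT='AVRO');

-- Derive a filtered stream, continuously written to a new Kafka topic
CREATE STREAM large_payments AS
  SELECT * FROM payments
  WHERE amount > 10000;
```

A Kafka Connect sink (for example, Elasticsearch) can then be pointed at the derived topic, completing the pipeline with configuration alone.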

