High availability (fault tolerance)

asked 2018-03-21 08:57:35 -0500 by oleksii.petrovskyi

updated 2018-03-21 10:41:03 -0500 by jeff

After reading the documentation, I found out that fault tolerance can be achieved by running pipelines in cluster mode. Isn't cluster mode overhead just to provide fault tolerance? Could I simply run two instances of SDC and get the same fault tolerance as in cluster mode (I'm not talking about scalability, only the failover scenario)? Please correct me if I'm wrong.


2 Answers


answered 2018-03-21 11:02:50 -0500 by oleksii.petrovskyi

Thanks, Jeff.

My flow is:

  • SDC get data from REST API application_A
  • SDC transform data and push it to kafka topic_A
  • REST API application_B process message from topic_A and push result to topic_B
  • SDC read topic_B and post result to application_A

This is very similar to your scenario, but more complicated. What do you think: would two instances of SDC work as a failover in my case?
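One point worth noting for the Kafka legs of this flow: if both SDC instances run the pipeline that reads topic_B under the same Kafka consumer group, Kafka itself provides failover by rebalancing partitions to the surviving consumer when one dies. A sketch of the relevant consumer settings (the broker address and group name below are placeholders, not values from this thread):

```properties
# Both SDC instances subscribe to topic_B with the same group.id,
# so Kafka spreads partitions across them and rebalances on failure.
bootstrap.servers=kafka-broker-1:9092
group.id=sdc-topic-b-readers
auto.offset.reset=earliest
```

Note that with both instances active, each may process different partitions concurrently when posting results back to application_A; if you need strict active/passive behavior instead, you would keep the standby pipeline stopped and start it only on failure.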


answered 2018-03-21 10:41:56 -0500 by jeff

Depending on the specific origins you are using, yes, this could work for HA. Take the HTTP Server origin as an example. You can have two different SDC instances running the HTTP Server origin, with an HTTP load balancer in front of them. If one pipeline or SDC instance crashes, the other will continue handling requests.
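The load-balancer setup described above could look something like the following nginx sketch. The hostnames, ports, and upstream name are placeholders; the SDC HTTP Server origin listens on whichever port you configure in the pipeline.

```nginx
# Two SDC instances, each running the same pipeline with an HTTP Server origin.
# Hostnames and ports are examples only.
upstream sdc_http_origin {
    server sdc-node-1.example.com:8000;
    server sdc-node-2.example.com:8000;
}

server {
    listen 80;

    location / {
        # If one SDC instance fails, nginx retries the request
        # against the remaining upstream server.
        proxy_pass http://sdc_http_origin;
        proxy_next_upstream error timeout;
    }
}
```

With both pipelines running, either instance can serve a request; when one goes down, nginx's upstream error handling routes traffic to the survivor.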

