Caching Server In Docker Container Orchestrated In Kubernetes
I want to implement a caching server in a Docker container, with the whole cluster orchestrated by Kubernetes.
Below is a diagram of the data flow.
Is this setup in line with best practices? If not, please suggest a better approach.
I would recommend:
Use a Kubernetes Operator or a StatefulSet for MongoDB instead of a stand-alone MongoDB pod. Running MongoDB in a single pod without persistence or replica members is extremely dangerous: if the pod is rescheduled or its node dies, you lose your data. You can find the MongoDB operator here.
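As a minimal sketch of the StatefulSet approach, something like the following gives each MongoDB member stable network identity and its own PersistentVolumeClaim (the names `mongo`, `mongo-headless`, the replica-set name `rs0`, and the storage size are illustrative assumptions, not values from the question):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: mongo-headless   # a matching headless Service must exist for stable DNS names
  replicas: 3                   # replica-set members instead of a single pod
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
        - name: mongo
          image: mongo:6.0
          command: ["mongod", "--replSet", "rs0", "--bind_ip_all"]
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: data
              mountPath: /data/db
  volumeClaimTemplates:         # one PersistentVolumeClaim per pod, survives rescheduling
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```

The operator route automates the remaining steps (initiating the replica set, handling failover) that you would otherwise script yourself.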
Run the cache server in its own pods or Deployment, separate from the application, with master-replica replication. An Operator can simplify this as well.
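A minimal sketch of the replica side of such a setup, assuming Redis as the cache and an existing Service named `redis-master` fronting the master pod (all names here are illustrative assumptions):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-replica
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: replica
  template:
    metadata:
      labels:
        app: redis
        role: replica
    spec:
      containers:
        - name: redis
          image: redis:7
          # each replica follows the master reachable via the redis-master Service
          args: ["--replicaof", "redis-master", "6379"]
          ports:
            - containerPort: 6379
```

Application pods then read from a Service selecting `role: replica` and write through the master Service, keeping cache traffic off the application Deployment entirely.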
Attach the Load Balancer to the Ingress Controller, not to the database pods. We don't treat database pods like application pods (unless they share data with each other via an external storage service), because they hold state: a LoadBalancer Service forwards requests to any pod in the Deployment, which is fine for stateless applications but wrong for databases.
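A sketch of that pattern: only the application is exposed through the Ingress, and the cloud load balancer fronts the ingress controller (the host name and Service name below are assumptions for illustration):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
spec:
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-service   # ClusterIP Service selecting the application pods
                port:
                  number: 80
```

The database and cache stay behind ClusterIP (or headless) Services reachable only from inside the cluster.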
This varies depending on your needs and requirements.