Kong is the world’s most used open-source API gateway, known for its lightweight footprint and high extensibility. It’s trusted by thousands of enterprises to handle billions of API calls. But even the best software is limited by its underlying hardware. By pairing Kong Gateway with Kamatera’s enterprise-grade cloud infrastructure, you eliminate the bottlenecks that plague traditional cloud environments.
Kamatera delivers the backbone to run Kong Gateway flawlessly—with guaranteed performance, global reach, and the flexibility to scale instantly.

Why Run Kong on Kamatera?
Kamatera’s cloud infrastructure is purpose-built for demanding workloads. With guaranteed CPU resources, high-speed SSD storage, and low-latency networking, your Kong deployment handles traffic spikes without breaking a sweat.
With Kamatera’s 20+ data centers spanning four continents, you can deploy Kong instances close to your users and applications.
Kamatera’s flexible cloud platform lets you scale Kong vertically or horizontally in minutes. Add resources during peak hours, spin up new instances for testing, or deploy across multiple regions.
Protect your APIs with Kong’s robust security features—authentication, rate limiting, IP restrictions, and more—all running on Kamatera’s secure infrastructure with DDoS protection and built-in firewall.
Price Calculator
Data Centers Around the Globe
Frequently Asked Questions
Kong is an open-source API gateway built on top of NGINX. It sits between clients and your backend services, handling cross-cutting concerns like authentication, rate limiting, logging, load balancing, and request transformation. Kong helps you manage, secure, and observe your APIs and microservices without adding complexity to your application code. It’s widely used in production by companies of all sizes and processes billions of API requests daily.
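As a brief illustration, here is a minimal sketch of registering a backend service and a route through Kong’s Admin API, assuming the Admin API is listening on its default port 8001; the service name and upstream URL are placeholders for your own backend:

```python
import requests

# Assumes Kong's Admin API is reachable on its default port 8001.
ADMIN_API = "http://localhost:8001"

# Register a backend service with Kong (placeholder name and upstream URL).
service = requests.post(
    f"{ADMIN_API}/services",
    json={"name": "orders-service", "url": "http://orders.internal:8080"},
)
service.raise_for_status()

# Expose that service to clients on the /orders path.
route = requests.post(
    f"{ADMIN_API}/services/orders-service/routes",
    json={"name": "orders-route", "paths": ["/orders"]},
)
route.raise_for_status()

print("Service ID:", service.json()["id"])
print("Route ID:", route.json()["id"])
```

Once the route exists, client traffic sent to Kong’s proxy port under /orders is forwarded to the backend, with any plugins you enable applied along the way.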
Based on the expected size and demand of the cluster, we recommend the following resource allocations as a starting point:
Development:
1-2 cores CPU
2-4 GB RAM
Small:
1-2 cores CPU
2-4 GB RAM
Medium:
2-4 cores CPU
4-8 GB RAM
Large:
8-16 cores CPU
16-32 GB RAM
For more details, refer to the Kong Resource Sizing Guidelines.
For high availability, deploy multiple Kong instances behind a load balancer. Kamatera’s load balancing service or your own software load balancer (like HAProxy or NGINX) can distribute traffic across Kong nodes. If using a database backend, set up PostgreSQL or Cassandra in a replicated configuration across multiple servers. For the highest availability, deploy Kong instances across multiple Kamatera data centers and use global load balancing to route traffic to the nearest healthy instance.
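As a starting point, a simple external health check can verify each node before your load balancer sends it traffic. The sketch below assumes each node’s Admin API is reachable on port 8001 at the hypothetical addresses shown; the /status response shape can vary slightly between Kong versions:

```python
import requests

# Hypothetical Admin API addresses of Kong nodes in different data centers;
# replace with your own hosts.
KONG_NODES = [
    "http://10.0.0.11:8001",
    "http://10.0.0.12:8001",
]

def healthy(node: str) -> bool:
    """A node counts as healthy if its Admin API answers /status and,
    when a database backend is configured, reports it as reachable."""
    try:
        resp = requests.get(f"{node}/status", timeout=2)
        resp.raise_for_status()
        return resp.json().get("database", {}).get("reachable", True)
    except requests.RequestException:
        return False

for node in KONG_NODES:
    print(node, "OK" if healthy(node) else "UNREACHABLE")
```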
Start by scaling vertically—increase CPU and RAM on your Kamatera servers. Once you reach the limits of vertical scaling or need redundancy, scale horizontally by adding more Kong nodes behind a load balancer. Each Kong instance operates independently, so adding nodes is just a matter of provisioning more servers with identical configurations. You can also scale your database separately if it becomes a bottleneck. Monitor your metrics to understand whether you need more Kong nodes, more powerful nodes, or database optimization.
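To make that decision with data rather than guesswork, you can poll a node’s /status endpoint and watch its connection counters. The sketch below assumes the Admin API on port 8001 at a hypothetical address; the threshold is purely illustrative, not a Kong recommendation, and exact field names may differ by Kong version:

```python
import time
import requests

NODE = "http://10.0.0.11:8001"      # hypothetical Kong Admin API address
ACTIVE_CONN_THRESHOLD = 5000        # illustrative threshold, tune to your workload

while True:
    # Read the server-level connection counters reported by the node.
    server = requests.get(f"{NODE}/status", timeout=2).json().get("server", {})
    active = server.get("connections_active", 0)
    if active > ACTIVE_CONN_THRESHOLD:
        print(f"{active} active connections - consider adding a node or resizing this server")
    else:
        print(f"{active} active connections - within capacity")
    time.sleep(60)
```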
Kamatera provides secure infrastructure with DDoS protection, private networking options, and firewall capabilities. You’re responsible for securing your Kong deployment—keeping software updated, configuring proper authentication and authorization, restricting network access, using SSL/TLS, and following security best practices. Kong itself provides robust security features for your APIs including authentication plugins, rate limiting, IP restrictions, and request validation.
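For example, a common first step is to require API keys and cap request rates on a service. The sketch below builds on the placeholder "orders-service" from the earlier example and assumes the Admin API on port 8001; plugin configuration fields can differ between Kong versions, so check the plugin documentation for yours:

```python
import requests

ADMIN_API = "http://localhost:8001"   # assumed Admin API address
SERVICE = "orders-service"            # placeholder service from the earlier example

# Require an API key on every request to the service.
requests.post(f"{ADMIN_API}/services/{SERVICE}/plugins",
              json={"name": "key-auth"}).raise_for_status()

# Cap each consumer at 60 requests per minute.
requests.post(f"{ADMIN_API}/services/{SERVICE}/plugins",
              json={"name": "rate-limiting", "config": {"minute": 60}}).raise_for_status()

# Create a consumer and issue it a key (placeholder values).
requests.post(f"{ADMIN_API}/consumers",
              json={"username": "partner-app"}).raise_for_status()
requests.post(f"{ADMIN_API}/consumers/partner-app/key-auth",
              json={"key": "example-secret-key"}).raise_for_status()
```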
Our 30-day free trial includes one server worth up to $100. You can set up your free VPS server, install an operating system, and select a location from one of our 20+ data centers worldwide.
If you choose monthly billing, you will receive your first invoice the month after the free trial expires. For example, if you start your free trial on November 20, it will run until December 20. If you choose to continue using our services and don’t terminate your server, your first invoice will be sent out after January 1. That invoice will include a prorated charge for December 20-31, as well as the month of January.
Our flexible monthly and hourly pricing models allow you to keep your costs under control. If you choose an hourly server, we bill for the resources you use. You’re only charged for the time your server is running. You can see real-time usage in your dashboard, and there are no surprise charges or hidden fees.
