Why Oracle Cloud Infrastructure is the Ideal Platform for Kotlin Enterprise & Platform Engineering
Post 0 of the Kotlin + OCI Platform Engineering series
Introduction
You’ve chosen Kotlin + GraalVM for your platform. The question isn’t whether OCI can run it — every major cloud can. The question is which cloud was built to run it.
Kotlin is no longer just an Android language. At KotlinConf 2025, JetBrains reported that 50% of its 2.5 million developer base now uses Kotlin for backend and server-side work. That’s a structural shift, not a trend. Combine it with the K2 compiler — default in IntelliJ IDEA 2025.1 — delivering 40%+ faster compilation, and you have a server-side ecosystem maturing fast.
GraalVM Native Image has matured alongside it. Native compilation is production-ready across every major Kotlin-compatible framework: Spring Boot with AOT, Micronaut (and Oracle’s Graal Dev Kit for Micronaut distribution), Ktor, and Quarkus. The startup numbers speak for themselves:

| Runtime | Startup Time |
|---|---|
| Quarkus Native Image | 0.049s |
| Micronaut Native Image | 0.050s |
| Spring Boot Native Image | 0.104s |
| Quarkus JVM | 0.656s |
| Spring Boot JVM | 1.909s |

Source: Java 25 Framework Benchmarks, October 2025 (gillius.org)
Sub-100ms startup transforms what’s architecturally possible. Serverless becomes viable for stateful workloads, autoscaling responds to traffic in real time, and cold starts stop being a production incident.
Here’s the fact that changes the cloud conversation: OCI is the only hyperscaler where the cloud vendor is also the original author and maintainer of the JVM runtime — not just a distributor of it. Oracle doesn’t just support GraalVM — Oracle builds GraalVM. That vertical integration has concrete consequences. Every OCI customer gets Oracle GraalVM 17 and 21 at no additional charge. Profile-Guided Optimization, historically a paid enterprise feature, is free for OCI subscribers.
Compare that to AWS, where the closest cold-start mitigation — SnapStart — applies exclusively to Lambda. OCI’s GraalVM integration spans all compute: VMs, containers, and functions.
This isn’t academic. When your compiler vendor is also your cloud vendor, runtime bugs become first-party issues with first-party SLAs. Optimization features ship to OCI first. The toolchain — from build pipelines to fleet management — is designed around the assumption that you’re running GraalVM.
Those startup and memory numbers translate directly to infrastructure cost:
fewer nodes to handle the same throughput
smaller container images
lower memory per pod
reduced compute spend across the board
OCI’s Unique Advantages for Kotlin Development
Oracle builds GraalVM. When you run on OCI, you’re running on the cloud built by the people who built your compiler. That vertical integration — compiler vendor, JDK vendor, and cloud vendor as a single entity — creates optimization opportunities that third-party arrangements structurally cannot. This isn’t a partnership announcement or a marketplace listing. It’s ownership, and the engineering benefits compound at every layer of the stack.
First-Party GraalVM Integration Across OCI Services
Every major OCI compute surface ships with GraalVM support baked in — not bolted on:
| OCI Service | GraalVM Integration |
|---|---|
| Compute Instances | yum install graalvm-25-native-image on Oracle Linux 7/8/9/10 |
| Cloud Shell | GraalVM for JDK 17 pre-installed, zero-setup |
| Code Editor | GraalVM for JDK 17 pre-installed, zero-setup |
| DevOps Build Pipelines | Native build_spec.yaml support; RPM packages: graalvm-17/21/25-native-image |
| Container Registry (OCIR) | Official GraalVM container images for jdk and native-image, AMD64 + AArch64 |
The practical implication: a new team member can open Cloud Shell, compile a Kotlin service to a native image, and push it to OCIR within minutes — no JDK installation, no JAVA_HOME gymnastics, no Dockerfile surgery to pull the right GraalVM base image. That’s what first-party integration buys you: fewer decisions on the Golden Path.
Profile-Guided Optimization — The Hidden Performance Lever
Most Kotlin teams don’t realize that Profile-Guided Optimization was historically a paid Oracle GraalVM feature. On OCI, it’s free — bundled with your subscription.
PGO works in three stages:
compile an instrumented binary
run it under realistic load to collect execution profiles
then recompile with those profiles baked in
The compiler auto-selects -O3 optimization, inlining the exact hot paths your service actually exercises in production. For Kotlin microservices with predictable request patterns — and most microservices have very predictable hot paths — PGO closes the throughput gap between native image and JIT.
Here’s what that looks like in an OCI DevOps pipeline:
```yaml
# build_spec.yaml for OCI DevOps
version: 0.1
steps:
  - type: Command
    name: "Install GraalVM 21"
    command: yum -y install graalvm-21-native-image
  - type: Command
    name: "Instrument binary for PGO"
    command: |
      ./mvnw package -Pnative -DskipTests -DinstrumentedBuild=true
      ./target/myservice & echo $! > /tmp/myservice.pid
      ./scripts/warmup.sh
      kill $(cat /tmp/myservice.pid)
  - type: Command
    name: "Optimized native build with PGO"
    command: ./mvnw package -Pnative -DskipTests -Doptimize=true
```

Three steps. No external tooling. The instrumented build collects profiles, the warmup script simulates production traffic, and the final build produces a binary that already knows where your code spends time. This is the kind of optimization that turns a 10-pod deployment into an 8-pod deployment — and that’s a line item your FinOps team will notice.
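The warmup stage is just traffic replay against the instrumented binary. Here is a minimal sketch of what a script like scripts/warmup.sh could do, written in Kotlin with java.net.http; the endpoint paths and iteration count are illustrative assumptions:

```kotlin
import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

// Replay representative requests so the instrumented binary records
// realistic execution profiles before the PGO rebuild. Paths and
// iteration count are hypothetical; tune them to your hot endpoints.
fun warmup(baseUrl: String, paths: List<String>, iterations: Int): Int {
    val client = HttpClient.newHttpClient()
    var completed = 0
    repeat(iterations) {
        for (path in paths) {
            val request = HttpRequest.newBuilder(URI.create(baseUrl + path)).GET().build()
            val response = client.send(request, HttpResponse.BodyHandlers.ofString())
            if (response.statusCode() in 200..299) completed++
        }
    }
    return completed
}
```

The more faithfully the replayed mix matches production traffic, the better the profile: PGO optimizes exactly what it observes.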
Ampere A1 + GraalVM: ARM Beating x86
Here’s the benchmark that should challenge your assumptions: Oracle GraalVM on OCI’s Ampere A1 outperforms Intel Xeon Gold + OpenJDK. ARM plus an optimizing compiler beats x86 plus a standard JDK.
Why? Ampere Altra uses a single-threaded core design — no simultaneous multithreading, no “noisy neighbor” contention between hardware threads sharing execution units. Each OCPU is a dedicated physical core delivering consistent, predictable throughput. For JVM workloads in multi-tenant Kubernetes clusters, that consistency matters more than raw peak clock speed. And A1 pricing at $0.01/OCPU-hr makes the cost-performance ratio difficult for any other hyperscaler to match.
Startup and Memory: The Container Density Story
Java 25 framework benchmarks (October 2025) make the cold-start case concrete:
| Runtime | Startup Time |
|---|---|
| Quarkus Native Image | 0.049s |
| Micronaut Native Image | 0.050s |
| Spring Boot Native Image | 0.104s |
| Quarkus JVM | 0.656s |
| Spring Boot JVM | 1.909s |
Native image startup is 13–39x faster depending on framework. Memory tells a similar story — up to 50% RSS reduction. A Micronaut web server drops from 470 MB on full JDK to 22 MB as a static native image. That’s not a performance curiosity; it’s the difference between 10 pods and 40 pods on the same node.
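The density arithmetic behind that claim is worth making explicit. A toy model; the 16 GiB allocatable node and the per-pod memory requests in the usage note are hypothetical, not the benchmark's RSS figures:

```kotlin
// How many pods fit on one worker node when memory is the binding
// constraint, and how many nodes a fleet of pods then requires.
// Node size and pod requests are illustrative assumptions.
fun podsPerNode(allocatableMiB: Int, podRequestMiB: Int): Int =
    allocatableMiB / podRequestMiB

fun nodesNeeded(totalPods: Int, podsFittingPerNode: Int): Int =
    (totalPods + podsFittingPerNode - 1) / podsFittingPerNode  // ceiling division
```

With 16 GiB allocatable (16384 MiB), a 1536 MiB JVM request packs 10 pods per node while a 384 MiB native-image request packs 42, so a hypothetical 100-pod fleet drops from 10 nodes to 3.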
The trade-off: for long-running, CPU-bound workloads, PGO profiles need refreshing from time to time as traffic patterns shift. With PGO and the G1 GC, Native Image delivers on-par peak performance, lower memory, and similar or better p99 latency — and it outright dominates cold-start latency, container density, and serverless scale-to-zero.
Java Management Service: Fleet Governance at Scale
Once you’re running Kotlin services across dozens of OKE clusters, the operational question shifts from “how fast does it start” to “what JDK versions are running, are they patched, and can we prove it to auditors.” That’s where Java Management Service fills a gap no other hyperscaler addresses:
| JMS Capability | Business Value |
|---|---|
| Fleet discovery | Automatic detection of all Java runtimes across OCI workloads |
| JDK lifecycle management | Centralized download, install, configure, patch |
| JVM Tuning Recommendations | Performance Analysis for production tuning |
| JDK Flight Recorder integration | Deep application-level profiling |
| Java Migration Analysis | Automated feasibility assessment for JDK upgrades |
| Crypto Event Analysis + Java Libraries scanning | Supply chain security for SOC2 compliance |
That last row deserves emphasis. Crypto Event Analysis scans your fleet for weak cryptographic algorithms. Java Libraries scanning identifies vulnerable dependencies. Together, they automate the evidence collection that SOC2 auditors require — turning a weeks-long manual process into a dashboard query. For organizations running hundreds of JVM services, this isn’t a nice-to-have; it’s the difference between passing an audit in days versus months.
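Conceptually, the Java Libraries scan is an inventory-against-advisories join. A toy sketch of that idea (the artifact names and advisory data are invented; JMS does this fleet-wide and automatically):

```kotlin
// Match a dependency inventory against a map of artifact name ->
// known-vulnerable versions. A real scanner works from CVE feeds;
// this advisory map is an invented stand-in for illustration.
data class Artifact(val name: String, val version: String)

fun vulnerable(
    inventory: List<Artifact>,
    advisories: Map<String, Set<String>>
): List<Artifact> =
    inventory.filter { advisories[it.name]?.contains(it.version) == true }
```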
These performance and governance advantages are real, but they don’t exist in a vacuum — they compound when paired with OCI’s pricing model.
Cost-Effectiveness
AWS gives you 100 GB of free egress. OCI gives you 10 TB. That’s not a rounding error — it’s a different pricing philosophy. And once you internalize the difference, every cost model you build for multi-service Kotlin platforms changes fundamentally.
Egress: The Hidden Tax
Most teams don’t budget for egress until it shows up on the invoice. At scale, it’s the line item that blows up FinOps models.
| Cloud | Free Tier Egress | Paid Rate |
|---|---|---|
| OCI | 10 TB/month | ~$0.0085/GB |
| AWS | 100 GB/month | $0.09/GB |
| Azure | 100 GB/month | $0.087/GB |
| GCP | 100 GB/month | $0.12/GB (Standard Tier) |
At 50 TB/month — a realistic number for a platform running observability pipelines, API traffic, and cross-region replication:
| Cloud | 50 TB/month Cost |
|---|---|
| OCI | ~$340 |
| AWS | ~$4,420 |
| Azure | ~$3,400 |
| GCP | ~$3,400 |
OCI is over 10x cheaper. Not because of a promotional discount — because the pricing model treats egress as infrastructure, not a profit center.
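The table is easy to sanity-check. A sketch of the billing formula, assuming decimal terabytes (1 TB = 1000 GB) and flat, untiered per-GB rates; real invoices apply volume tiering, so treat the outputs as approximations:

```kotlin
// Monthly egress bill: subtract the free allowance, then charge a
// flat per-GB rate on the remainder. Volume tiering is ignored.
fun egressCostUsd(usedTb: Double, freeTb: Double, ratePerGb: Double): Double {
    val billableGb = ((usedTb - freeTb) * 1000).coerceAtLeast(0.0)
    return billableGb * ratePerGb
}
```

egressCostUsd(50.0, 10.0, 0.0085) reproduces OCI's ~$340; the same 50 TB at AWS's $0.09/GB lands near $4,500 under this flat-rate model, in the same ballpark as the table.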
Always Free: What You Can Actually Build
Forget spec sheets. Here’s what a team can run permanently at zero cost on OCI’s Always Free tier:
OKE Basic cluster with Arm A1 worker nodes (4 OCPUs + 24 GB RAM)
2–3 Kotlin microservices compiled to GraalVM native images (remember — 40 MB per service, not 350 MB)
OCI API Gateway (1M requests/month)
OCI Monitoring + APM (500M data points + 1,000 trace events)
10 TB/month outbound egress
That’s a legitimate proof-of-concept platform — not a sandbox. No 12-month expiration. No credit card surprise after a trial period.
Caveat: Oracle may reclaim Always Free compute instances idle for 7+ consecutive days (CPU/memory/network utilization below 20%). Size your workloads accordingly.
OKE: Serverless Kubernetes at a Third of the Price
Per-unit pricing tells the real story. These are published list rates using each provider’s lowest-cost region — except OCI, which charges the same rate everywhere.
| Metric | EKS (Fargate) | AKS (Virtual Nodes) | GKE Autopilot | OKE Virtual Nodes |
|---|---|---|---|---|
| Control plane | $0.10/cluster-hr | $0.10/cluster-hr | $0.10/cluster-hr | $0.10/cluster-hr |
| CPU | $0.04048/vCPU-hr | $0.04050/vCPU-hr | $0.0445/vCPU-hr | $0.0125/vCPU-hr |
| Memory | $0.004445/GiB-hr | $0.00445/GiB-hr | $0.0049225/GiB-hr | $0.0015/GiB-hr |
Scenario A — 20 pods, each 2 vCPU / 8 GiB, running 31 days:
| Provider | Monthly Cost |
|---|---|
| EKS (Fargate) | $1,808.21 |
| AKS Virtual Nodes | $1,809.41 |
| GKE Autopilot | $1,910.29 |
| OKE Virtual Nodes | $658.44 |
OCI is 2.7–2.9x cheaper across the board.
These comparisons are based on Oracle’s March 2023 published pricing analysis. Cloud pricing evolves — verify current OKE, EKS, AKS, and GKE rates before making procurement decisions.
Enterprise Scale: Where the Numbers Get Serious
Oracle’s published case study describes a financial services customer running a large Kubernetes fleet: EKS (Fargate) at ~$723K/month vs. OKE at ~$335K/month — a delta of ~$388K/month, or roughly $4.6M/year in savings. This is Oracle’s own example; run your own TCO analysis with your actual workload profiles before making procurement decisions.
Regional Pricing Parity
OCI charges the same rate in every region globally. AWS charges up to +72% in São Paulo, +35% in Zurich, +25% in Tokyo. Azure and GCP apply similar regional premiums.
The implication for platform engineering: a multi-region deployment across Tokyo, Frankfurt, and São Paulo on OCI costs the same as a single US East region on OCI. On AWS, that identical footprint costs 25–72% more per additional region. If you’re building a Golden Path that includes multi-region by default, OCI is the only provider where the pricing model doesn’t punish you for it.
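Sketching the footprint math with a hypothetical $100k/month per-region baseline and the premium percentages quoted above:

```kotlin
// Total monthly spend for a multi-region footprint: the baseline
// per-region cost, uplifted by each region's pricing premium (in
// percent). OCI's premiums are all zero; AWS's vary by region.
fun multiRegionCostUsd(baselinePerRegion: Double, premiumsPct: List<Int>): Double =
    premiumsPct.sumOf { baselinePerRegion * (1 + it / 100.0) }
```

Three OCI regions cost exactly 3x baseline: $300k. The same footprint across Tokyo (+25%), Zurich (+35%), and São Paulo (+72%) comes to $432k under this model, a 44% penalty for identical infrastructure.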
Predictable Pricing Is an Engineering Property
Predictable pricing — no egress surprises, no inter-AZ transfer fees, no regional premiums. You can model it accurately, budget for it, and audit against it. For SOC2-compliant platforms where cost anomalies trigger investigation workflows, pricing predictability reduces operational overhead directly.
The Compound Effect: GraalVM + OKE
This is where the performance and cost stories compound. GraalVM native images reduce memory footprint by ~87% (350 MB → 40 MB per service). Fewer megabytes per pod means fewer pods per node. Fewer pods per node means fewer nodes in the cluster. Fewer nodes means a lower OKE bill — at rates already 2.7x cheaper than the competition. The savings are multiplicative across the stack, not additive.
A Kotlin service that costs $1,808/month on EKS Fargate with JVM containers could cost under $250/month on OKE Virtual Nodes with GraalVM native images. That’s not optimization — that’s a different cost structure entirely.
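That claim can be reproduced to first order from the per-unit rates in the comparison table. The JVM pod matches Scenario A (2 vCPU / 8 GiB on Fargate rates); the native-image pod requests (0.5 vCPU / 1 GiB on OKE Virtual Nodes rates) are hypothetical right-sized figures:

```kotlin
// Per-pod hourly cost from vCPU and memory rates, then a monthly
// cluster bill including the control-plane fee (31 days = 744 h).
// Rates come from the published comparison; pod sizes are inputs.
fun podHourUsd(vcpu: Double, gib: Double, cpuRate: Double, memRate: Double): Double =
    vcpu * cpuRate + gib * memRate

fun monthlyUsd(pods: Int, podHour: Double, controlPlanePerHr: Double, hours: Int = 744): Double =
    (pods * podHour + controlPlanePerHr) * hours
```

Twenty JVM pods on Fargate rates reproduce Scenario A's ~$1,808; the same twenty services as right-sized native images on OKE rates come in under $200 with these assumptions: a different cost structure, exactly as claimed.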
Platform Engineering Capabilities
Cheap compute is table stakes. What matters is what you can build on it.
A platform team’s job is to pave Golden Paths: opinionated, secure-by-default workflows that let application teams ship without filing tickets. On OCI, four capabilities define those paths: OKE Workload Identity for zero-credential IAM, GraalOS for native serverless, Pulumi for typed infrastructure, and a clear service mesh strategy.
OKE Workload Identity — The Architectural Centerpiece
If you take one thing from this section, make it this: OKE Workload Identity eliminates credentials from your Kubernetes platform.
Every pod running on an OKE Enhanced cluster gets an IAM-native identity scoped to three dimensions: the cluster OCID, the Kubernetes namespace, and the service account. That triple acts as a cryptographic principal in OCI’s IAM policy engine.
No static API keys.
No credentials rotated on a schedule by a pipeline you hope still works.
No Kubernetes Secrets containing base64-encoded access tokens that any developer with `kubectl get secret` can decode.
The policy syntax makes the scoping explicit:

```
Allow any-user to manage object-family in compartment production
where all {
    request.principal.type = 'workload',
    request.principal.cluster_id = '<cluster-ocid>',
    request.principal.namespace = 'payments-ns',
    request.principal.service_account = 'payment-processor-sa'
}
```

Read that policy out loud. Only pods in the payments-ns namespace, running under the payment-processor-sa service account, on that specific cluster, can touch Object Storage in the production compartment. If someone deploys the same service account name on a dev cluster, it gets nothing. The cluster OCID is part of the identity — not an afterthought.
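A toy evaluator makes the triple scoping tangible. The OCID values and matching logic below are illustrative stand-ins, not the IAM engine's implementation:

```kotlin
// A workload principal is the (cluster, namespace, service account)
// triple; the policy admits a request only when all three match.
data class WorkloadPrincipal(
    val clusterId: String,
    val namespace: String,
    val serviceAccount: String
)

// Placeholder OCID standing in for the production cluster.
val PROD_CLUSTER = "ocid1.cluster.oc1..prod-example"

fun policyAllows(p: WorkloadPrincipal): Boolean =
    p.clusterId == PROD_CLUSTER &&
        p.namespace == "payments-ns" &&
        p.serviceAccount == "payment-processor-sa"
```

The same namespace and service account deployed on a dev cluster evaluates to false: the cluster OCID is part of the identity.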
On the application side, using the Java OCI SDK makes this trivially consumable:
```kotlin
val provider = OkeWorkloadIdentityAuthenticationDetailsProvider.builder()
    .build()

val objectStorageClient = ObjectStorageClient.builder()
    .build(provider)
```

Two lines. No configuration files, no environment variable wiring, no init containers fetching tokens. The SDK discovers the workload identity from the pod’s projected service account token automatically. Your application code doesn’t know it’s running on Kubernetes — it just has an authenticated OCI client.
Open source Micronaut (including Oracle’s Graal Dev Kit build) provides higher-level abstractions over OCI (and other cloud) services.
Every API call made through that client is recorded in the OCI Audit service with the full workload identity principal. When your SOC2 auditor asks “which service accessed this bucket, from which cluster, in which namespace, at what time?” — you have the answer without building a single additional logging pipeline.
The catch: Workload Identity requires OKE Enhanced clusters. That’s $0.10/cluster-hour, capped at $74.40/month for each OKE enhanced cluster you provision. For production workloads where you’d otherwise spend engineering time building and maintaining a secrets rotation pipeline, this counts as a rounding error.
GraalOS — Serverless with Oracle’s Native Runtime
OCI Functions runs on the Fn Project (open source), but the runtime story goes deeper. GraalOS compiles your functions to native executables using GraalVM’s ahead-of-time compiler. Functions don’t boot a JVM — they execute as native binaries on Oracle’s serverless substrate.
The impact is measurable. Oracle claims startup in seconds versus tens of seconds for traditional JVM serverless. Memory consumption drops up to 50% compared to traditional JVM serverless functions. That eliminates the need for provisioned concurrency — the strategy AWS teams use to mask Lambda cold starts at the cost of paying for idle compute.
The constraint: GraalOS requires Micronaut-based functions built with the Graal Development Kit (GDK), targeting Java 17+. If your team is already on Spring Boot, you’ll need Micronaut for the serverless tier. That’s a real trade-off — but for new function development on a Kotlin stack, Micronaut’s compile-time DI and first-class Kotlin support make it a natural fit.
Pulumi — Typed IaC with an Honest Assessment
For teams that want infrastructure defined in Kotlin rather than YAML, Pulumi is the path — but you should know what you’re buying.
Path 1: pulumi-kotlin — a community library maintained by VirtuslabRnD (~54 GitHub stars, experimental). It wraps the official Pulumi providers with an idiomatic Kotlin DSL. Type-safe, expressive, and actively developed — but community-maintained means you own the risk.
Path 2: Pulumi Java SDK — officially supported by Pulumi. Since Kotlin compiles to JVM bytecode, you call the Java SDK directly with full interop. Less idiomatic, but backed by Pulumi’s support and release cadence.
Oracle’s investment here is tangible: four official blog posts on Pulumi + OCI:

https://blogs.oracle.com/cloud-infrastructure/pulumi-oci-iac-approach-deploy-manage-resources
https://blogs.oracle.com/developers/pulumi-brings-universal-infrastructure-as-code-to-oracle-cloud
https://blogs.oracle.com/cloud-infrastructure/terraform-to-pulumi-multiple-iac-tools-cloud
https://blogs.oracle.com/cloud-infrastructure/compare-terraform-pulumi-oci-multicloud-manage

plus oracle-devrel reference implementations on GitHub for OKE deployment.
My recommendation: use pulumi-kotlin if your team is comfortable depending on a community library and values the DSL ergonomics. Fall back to the Java SDK when you need the stability guarantees of an officially supported path.
Managed Service Mesh
OCI gives you one managed option: Istio as an OKE add-on, and it’s the right default for most teams. Oracle manages the Istio lifecycle — upgrades, patches, compatibility with OKE versions. Your team likely already knows Istio’s traffic management and observability model. Choosing the managed add-on means you keep that expertise and shed the operational burden.
Enterprise-Grade Features
Before you deploy, your Architect needs to sign off on compliance. Here’s what they’ll find.
| Certification | Coverage |
|---|---|
| ISO/IEC 27001:2013 | Stage 2 certified |
| SOC 1 Type 2 | Security and availability controls |
| SOC 2 Type 2 | AICPA Trust Services — Security and Availability |
| SOC 3 | Public-facing attestation |
| PCI DSS Level 1 | Covers Compute, Networking, LB, Block Volumes, Object Storage, Database, OKE |
| HIPAA | Supported for OCI and Oracle Database@Azure |
| ISO/IEC 27017 | Cloud-specific security controls |
| ISO/IEC 27018 | PII protection in public clouds |
That table clears most procurement checklists in a single pass. But certifications are table stakes. The differentiators live deeper in the stack.
FIPS 140-2 Level 3 HSMs Included
OCI Vault ships with FIPS 140-2 Level 3 certified HSMs as a built-in capability. Not a separate SKU. Not a separate billing line. AWS CloudHSM is also Level 3 — but it’s a standalone service you provision separately and pay for independently. On OCI, it’s part of the platform.
OCI also supports BYOK (Bring Your Own Key) and HYOK (Hold Your Own Key), where encryption keys are stored entirely outside OCI on third-party HSMs. For regulated industries — banking, healthcare, government — HYOK is often the difference between a “yes” and a six-month legal review.
Two-Region-Per-Country: Data Residency as an Engineering Feature
OCI operates 50+ regions across 28 countries. Oracle’s strategy is deliberate: every country gets at least two regions. That means in-country disaster recovery without cross-border data transfer.
This isn’t a geography trivia point. GDPR in the EU, DPDP in India, the UAE’s data localization framework, Australia’s Privacy Act amendments — all impose constraints on where data can physically reside during failover. If your DR target is in another jurisdiction, you’ve created a compliance incident, not a recovery.
Two regions per country turns data residency from a legal problem into an infrastructure configuration.
Autonomous Database: Fewer Runbooks, Smaller Rotations
For Platform Engineering teams, Autonomous Database is an ops burden argument. Self-patching. Zero-downtime upgrades. Compute auto-scaling to 3x base ECPU on demand — no capacity planning tickets, no 2 AM scaling events.
Automated Regression Detection (shipped April 2025) catches performance degradation during maintenance windows before it reaches your SLIs. That’s one less runbook your on-call engineer has to execute and one fewer unplanned weekend.
The AI capabilities are production-relevant too: native Vector Search for RAG pipelines and Select AI for NL-to-SQL queries against operational data.
These are GA features your Kotlin services can call today via JDBC.
Database@Azure, @Google Cloud, @AWS: OCI Inside Your Current Cloud
This is OCI’s most strategically interesting play. Oracle deploys Autonomous Database and Exadata infrastructure inside Azure, Google Cloud, and AWS data centers. Private cross-cloud interconnect — no public internet hop, no double-egress billing.
For enterprise teams already running on AWS or Azure, this isn’t a migration ask. It’s an integration path. You keep your existing compute, networking, and CI/CD. You add Oracle’s database capabilities alongside them, connected over a private fabric. Your Kotlin services on EKS or AKS talk to Autonomous Database at local-network latency.
FastConnect: Flat-Rate Private Connectivity
Dedicated connections from 1 Gbps to 400 Gbps. Flat hourly rate based on port capacity. No per-byte egress charges — the same pricing philosophy we covered in the cost section, extended to dedicated interconnects.
For hybrid architectures moving terabytes between on-premises data centers and OCI, predictable network costs aren’t a nice-to-have but a FinOps requirement.
Developer Experience
Enterprise credentials get you through procurement. Developer experience gets you through sprint one.
A Kotlin team evaluating OCI has one question: can I be productive before the end of the day? The answer starts in the browser.
Cloud Shell delivers zero-setup from first login. You opened the OCI Console. GraalVM, Maven, Gradle, kubectl, Terraform — already there. Pre-authenticated to your tenancy. No local install, no credential management, no “works on my machine.” An engineer goes from zero to a compiled GraalVM native image in under ten minutes. That’s what happens when the cloud vendor and the JVM vendor are the same company.
The OCI Java SDK ships weekly. Version 3.81.0 dropped March 03, 2026. Weekly release cadence signals something specific: active maintenance and responsive bug fixes. When a production team evaluates long-term platform commitment, release velocity is a proxy for engineering investment. OCI’s SDK passes that test.
Adding it to a Kotlin project is one line:
```kotlin
implementation("com.oracle.oci.sdk:oci-java-sdk:3.81.0")
```

The SDK is Java-native, but Kotlin interop is clean. The builder pattern maps naturally, and the authentication model covers every deployment context — local dev, VM-based services, and Kubernetes pods:
```kotlin
val provider: AuthenticationDetailsProvider = ConfigFileAuthenticationDetailsProvider(
    "~/.oci/config", "DEFAULT"
)

val instancePrincipal = InstancePrincipalsAuthenticationDetailsProvider.builder()
    .build()

val workloadIdentity = OkeWorkloadIdentityAuthenticationDetailsProvider.builder()
    .build()

val objectStorageClient = ObjectStorageClient.builder()
    .build(provider)
```

Three providers, one interface. `ConfigFile` for local development, `InstancePrincipals` for compute instances, `OkeWorkloadIdentity` for pods. Your service code never changes between environments — only the provider wiring does.
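One way to keep that wiring explicit is a small environment switch. The interface and object names below are illustrative stand-ins for the three SDK providers, not OCI SDK types:

```kotlin
// Map deployment environment to the authentication strategy the
// service should construct; the rest of the code depends only on
// the abstraction. All names here are hypothetical placeholders.
sealed interface AuthStrategy
object ConfigFileAuth : AuthStrategy        // local dev: ~/.oci/config
object InstancePrincipalAuth : AuthStrategy // OCI compute instances
object WorkloadIdentityAuth : AuthStrategy  // OKE pods

fun authStrategyFor(env: String): AuthStrategy = when (env) {
    "local" -> ConfigFileAuth
    "vm" -> InstancePrincipalAuth
    "oke" -> WorkloadIdentityAuth
    else -> error("Unknown deployment environment: $env")
}
```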
The highest-value Kotlin-specific improvement is the coroutines bridge. The SDK’s async clients return java.util.concurrent.Future. One import — kotlinx.coroutines.future.await — turns every async SDK call into a suspend function. No callback hell, no reactive wrappers, no custom threading:
```kotlin
import kotlinx.coroutines.future.await

suspend fun getObject(
    client: ObjectStorageAsyncClient,
    namespace: String,
    bucket: String,
    objectName: String
): GetObjectResponse {
    return client.getObject(
        GetObjectRequest.builder()
            .namespaceName(namespace)
            .bucketName(bucket)
            .objectName(objectName)
            .build()
    ).await()
}
```

That’s idiomatic structured concurrency wrapping a Java SDK — no Kotlin-specific SDK required. Every *AsyncClient in the SDK becomes coroutine-native with this pattern.
For IDE tooling, the IntelliJ plugin (v1.2.0, June 2025) remains actively maintained. Autonomous Database management, multi-region support, credential management — all from the IDE Kotlin teams already live in. It’s not a checkbox feature; it’s the primary plugin for JetBrains-ecosystem developers on OCI.
CI/CD is equally straightforward. A complete Kotlin + GraalVM native image pipeline in OCI DevOps requires one file:
```yaml
# build_spec.yaml — OCI DevOps
version: 0.1
component: build
timeoutInSeconds: 600
steps:
  - type: Command
    name: Install GraalVM 21
    command: |
      yum -y install graalvm-21-native-image
      java -version
      native-image --version
  - type: Command
    name: Build Kotlin service
    command: |
      ./gradlew clean build -x test
  - type: Command
    name: Compile GraalVM native image
    command: |
      ./gradlew nativeCompile
  - type: Command
    name: Build and push container image
    command: |
      docker build -f Dockerfile.native -t ${OCIR_PATH}/${IMAGE_NAME}:${BUILDRUN_HASH} .
      docker push ${OCIR_PATH}/${IMAGE_NAME}:${BUILDRUN_HASH}
```

Four steps:
install GraalVM from Oracle’s own RPM repo (license included in your OCI subscription),
build the Kotlin project,
compile to native image,
push to OCI Container Registry.
Copy-paste this into your repo and you have a production-grade native image pipeline.
One more thing worth stating plainly: you can build everything in this series for $0, permanently. The Always Free tier includes 4 OCPUs, 24 GB RAM on Ampere A1, and an OKE Basic cluster — no 90-day expiry, no credit card surprise. That’s not a toy sandbox. That’s enough to run Kotlin microservices, a Kubernetes cluster, and an Autonomous Database as a genuine evaluation platform.
What’s Coming in This Series
This article made the case. The next articles build the platform. Each one produces working infrastructure and deployable code — no toy examples.
Foundation: OCI + Kotlin architectural case (you are here)
Post 1: Infrastructure (VCN, networking, Pulumi Kotlin components)
└─► Post 2: Kubernetes Platform (OKE + GitOps + observability)
├─► Post 3: Native Images (GraalVM microservices → OKE)
├─► Post 4: IDP Platform (Pulumi Automation API → OKE)
├─► Post 5: Event-Driven (Kafka/OCI Streaming → OKE)
└─► Post 6: Legacy Migration (Java→Kotlin + ADB → OKE)

Post 1 stands up the entire OCI foundation — VCN, subnets, gateways, security groups, compartments — as a reusable Kotlin component library in Pulumi. We write Pulumi in Kotlin, not YAML. You’ll ship a FinOps-ready tagging strategy that every subsequent resource inherits. Nothing else deploys until this is solid.
Post 2 builds a hardened OKE cluster on that foundation. ArgoCD delivers GitOps. Prometheus, Grafana, and OpenTelemetry handle observability. cert-manager, External Secrets Operator, OCI IAM integrated with Kubernetes RBAC — a production-grade platform developers self-serve from day one.
Post 3 takes a Spring Boot + Kotlin REST API, compiles it to a GraalVM native image, and deploys it to OKE. You’ll run the benchmark yourself: JVM vs. native on startup time, memory footprint, and sustained throughput. A CI/CD pipeline automates native builds end to end.
Post 4 is the central Platform Engineering deliverable. You build an Internal Developer Platform — an API that lets teams self-provision pre-approved OCI resources through Pulumi Automation API. CrossGuard policy packs enforce tagging, cost ceilings, and encryption requirements before anything touches production.
Post 5 wires up a full event-driven order processing system using Kafka producers and consumers with Kotlin coroutines, the Saga pattern for distributed transactions, and HPA scaling keyed to consumer lag on OCI Streaming.
Post 6 walks through a Java-to-Kotlin migration of a real Spring Boot application. Blue-green and canary deployments on OKE validate every step. You’ll measure the before-and-after: cost, startup, memory, throughput.
Everything you need to start is free. An OCI Always Free account gives you 4 Arm Ampere OCPUs, 24 GB of RAM, and a Kubernetes cluster you can run indefinitely.
Post 1 is waiting.


