You have 10 years of Java. You built the palace. Now learn to build the city — using storytelling, visual analogies, and memory techniques that stick forever.
Before a single annotation — you need to feel the difference. Here is the story that will never leave your brain.
In your 10 years of Java, you built this. One WAR/EAR file. One shared database. One deployment. Change the Order module → rebuild everything → deploy at 3AM → pray nothing breaks.
The real pain you've felt: Long build cycles · One bug kills everything · Scale the whole app just because Payments is slow · All teams step on each other · Can't upgrade Java version in just one module
Each microservice is an independent Spring Boot application. Its own database, its own deployment, its own team. They talk over the network. You scale only what's hot. One service going down doesn't bring down the city.
| Dimension | Monolith (Your Past) | Microservices (Your Future) |
|---|---|---|
| Deployment | One WAR for everything | Each service deployed independently |
| Scaling | Scale everything — even for one hot feature | Scale only Payment Service ×10 |
| Failure | One bug = full outage | Payment fails, Orders still work |
| Database | One shared database for all teams | Each service owns its own DB |
| Teams | All devs touch the same codebase | Each team owns one service |
| Tech stack | Locked to one version everywhere | Each service can use different tech |
Trace every step a request takes from the client to your business logic and back.
One service does one thing. Order Service only manages orders. Never let it touch payments or user profiles. Conway's Law: your service boundaries should mirror your team structure.
Each service has its own private database. No other service may access it directly. This is the hardest and most important rule. Violate it and you've created a distributed monolith — all the complexity, none of the benefits.
Services talk via REST (synchronous) or messaging (asynchronous). No shared memory. No direct method calls between services. This forces loose coupling at the cost of latency.
Any service can go down at any time. Circuit breakers, retries, timeouts, and graceful fallbacks are not optional — they are the price of being distributed. Assume everything will fail.
No central orchestration bus. No shared libraries that couple releases. Each team deploys independently, using its own CI/CD pipeline, on its own schedule.
How do services find each other when IPs change with every deployment?
When Order Service starts, it tells Eureka: "Hi! I'm order-service, I'm at 192.168.1.x:8081". Eureka stores this. The service then sends a heartbeat every 30s. If heartbeats stop → Eureka removes the instance.
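The register / heartbeat / evict cycle can be sketched in a few lines of plain Java. This is a toy in-memory registry, not Eureka's actual implementation; `ToyRegistry` and its method names are invented for illustration:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Toy service registry: register, heartbeat, evict stale instances. */
public class ToyRegistry {
    // serviceName -> (instance address -> last heartbeat timestamp in millis)
    private final Map<String, Map<String, Long>> services = new ConcurrentHashMap<>();
    private final long leaseMillis;   // e.g. 90_000, like Eureka's lease-expiration

    public ToyRegistry(long leaseMillis) { this.leaseMillis = leaseMillis; }

    /** Called on startup AND on every heartbeat: both just refresh the lease. */
    public void heartbeat(String service, String address, long nowMillis) {
        services.computeIfAbsent(service, s -> new ConcurrentHashMap<>())
                .put(address, nowMillis);
    }

    /** Returns only instances whose lease has not expired. */
    public List<String> healthyInstances(String service, long nowMillis) {
        Map<String, Long> instances = services.getOrDefault(service, Map.of());
        List<String> healthy = new ArrayList<>();
        for (Map.Entry<String, Long> e : instances.entrySet()) {
            if (nowMillis - e.getValue() <= leaseMillis) healthy.add(e.getKey());
        }
        return healthy;
    }
}
```

With a 90-second lease (Eureka's default lease-expiration-duration), an instance that misses three 30-second heartbeats simply stops appearing in lookups.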
When Payment Service wants to call Orders, it asks Eureka: "Give me a healthy instance of order-service". Eureka returns the IP + port. This happens automatically with Feign.
Eureka returns ALL healthy instances. Spring Cloud LoadBalancer picks one (round-robin by default). If an instance is unhealthy, it's removed. Zero manual IP management.
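The round-robin pick can be sketched like this (a toy balancer; `RoundRobinBalancer` is a made-up name, not Spring Cloud LoadBalancer's real class):

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

/** Toy client-side load balancer: round-robin over healthy instances. */
public class RoundRobinBalancer {
    private final AtomicInteger counter = new AtomicInteger();

    /** Pick the next instance in rotation, like Spring Cloud LoadBalancer's default. */
    public String choose(List<String> instances) {
        if (instances.isEmpty()) throw new IllegalStateException("no healthy instances");
        int i = Math.floorMod(counter.getAndIncrement(), instances.size());
        return instances.get(i);
    }
}
```

When Eureka evicts an unhealthy instance, the client simply receives a shorter list on the next lookup and the rotation continues over the survivors.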
@SpringBootApplication
@EnableEurekaServer   // ← ONE annotation = full registry server
public class EurekaServerApplication {
    public static void main(String[] args) {
        SpringApplication.run(EurekaServerApplication.class, args);
    }
}

# application.yml
server:
  port: 8761
eureka:
  client:
    register-with-eureka: false   # server doesn't register itself
    fetch-registry: false

# Visit http://localhost:8761 — you get a beautiful dashboard
# showing all registered services in real time!
# Every microservice adds these lines to register itself
spring:
  application:
    name: order-service   # ← THIS is the name others use to find you
eureka:
  client:
    service-url:
      defaultZone: http://eureka-server:8761/eureka/
  instance:
    prefer-ip-address: true
    lease-renewal-interval-in-seconds: 30     # heartbeat frequency
    lease-expiration-duration-in-seconds: 90

# Maven dependency needed:
# spring-cloud-starter-netflix-eureka-client
// Feign = declarative HTTP. Zero boilerplate. Looks like a local call!
@FeignClient(name = "inventory-service")   // name = Eureka registration name
public interface InventoryClient {

    @GetMapping("/api/inventory/{skuCode}")
    Boolean isInStock(@PathVariable String skuCode);

    @PostMapping("/api/inventory/reserve")
    ReserveResponse reserve(@RequestBody ReserveRequest request);
}

// Usage in OrderService — no URL, no IP, no port, no RestTemplate!
@Service
public class OrderService {

    @Autowired
    private InventoryClient inventoryClient;

    public Order placeOrder(OrderRequest request) {
        if (!inventoryClient.isInStock(request.getSkuCode())) {
            throw new OutOfStockException("Item not available");
        }
        // ... save order
    }
}

// Enable Feign in main class: @EnableFeignClients
@EnableEurekaServer = the whole directory. Clients say their name in application.yml. Others call them by that name in @FeignClient. No IP ever needed.

Without a gateway: every client knows every service URL. Auth logic lives in every service. CORS configured everywhere. Rate limiting — nowhere. With a gateway: one URL for everything. Auth once. Route anywhere.
spring:
cloud:
gateway:
routes:
- id: order-service
uri: lb://order-service # lb:// = load balanced via Eureka
predicates:
- Path=/api/orders/** # match this URL pattern
filters:
- StripPrefix=1 # remove /api prefix before forwarding
- name: CircuitBreaker
args:
name: orderCB
fallbackUri: forward:/fallback/orders
- name: RequestRateLimiter
args:
redis-rate-limiter.replenishRate: 10
redis-rate-limiter.burstCapacity: 20
- id: payment-service
uri: lb://payment-service
predicates:
- Path=/api/payments/**
- Method=POST # only route POST requests
filters:
        - AddRequestHeader=X-Gateway-Source, spring-cloud-gateway

@Component
public class AuthFilter implements GlobalFilter, Ordered {

    @Autowired
    private JwtUtil jwtUtil;

    @Override
    public Mono<Void> filter(ServerWebExchange exchange, GatewayFilterChain chain) {
        String path = exchange.getRequest().getPath().toString();

        // Skip auth for public endpoints
        if (path.startsWith("/api/public")) {
            return chain.filter(exchange);
        }

        String authHeader = exchange.getRequest().getHeaders().getFirst("Authorization");
        if (authHeader == null || !authHeader.startsWith("Bearer ")) {
            exchange.getResponse().setStatusCode(HttpStatus.UNAUTHORIZED);
            return exchange.getResponse().setComplete();
        }

        String token = authHeader.substring(7);
        if (!jwtUtil.isValid(token)) {
            exchange.getResponse().setStatusCode(HttpStatus.UNAUTHORIZED);
            return exchange.getResponse().setComplete();
        }

        // Optionally forward userId to downstream services
        ServerHttpRequest mutated = exchange.getRequest().mutate()
            .header("X-User-Id", jwtUtil.extractUserId(token))
            .build();
        return chain.filter(exchange.mutate().request(mutated).build());
    }

    @Override
    public int getOrder() { return -1; }   // run first
}
@RestController
public class FallbackController {

    @GetMapping("/fallback/orders")
    public ResponseEntity<?> ordersFallback() {
        return ResponseEntity
            .status(HttpStatus.SERVICE_UNAVAILABLE)
            .body(Map.of(
                "status", "degraded",
                "message", "Order service temporarily unavailable. Try again shortly.",
                "timestamp", Instant.now()
            ));
    }

    @GetMapping("/fallback/payments")
    public ResponseEntity<?> paymentsFallback() {
        return ResponseEntity.accepted()
            .body(Map.of("message", "Payment queued. You'll receive confirmation via email."));
    }
}
lb://service-name = load-balanced route via Eureka. GlobalFilter = security scanner all passengers pass through. Circuit Breaker at gateway level = if a terminal is broken, redirect to waiting area (fallback).

@RefreshScope = change config in Git, all services update without restart.

@SpringBootApplication
@EnableConfigServer   // ← That's the whole server. One annotation.
public class ConfigServerApplication { ... }

# application.yml
server:
  port: 8888
spring:
  cloud:
    config:
      server:
        git:
          uri: https://github.com/your-org/config-repo
          search-paths: '{application}'   # folder per service name
          default-label: main
          username: ${GIT_USERNAME}   # keep secrets in env vars
          password: ${GIT_TOKEN}
# Spring Boot 2.x needs bootstrap.yml
# Spring Boot 3.x uses spring.config.import
spring:
  config:
    import: optional:configserver:http://config-server:8888
  application:
    name: order-service   # Config Server fetches order-service.yml from Git
  profiles:
    active: dev   # fetches order-service-dev.yml

# URL pattern served by Config Server:
# /{application}/{profile} → order-service/dev → order-service-dev.yml
# /{application}/{profile}/{label}   (label = git branch)
// 1. Annotate beans that use config values with @RefreshScope
@RefreshScope   // ← re-creates this bean when config changes
@RestController
public class OrderController {

    @Value("${order.max-items:10}")   // default value = 10
    private int maxItemsPerOrder;

    @Value("${feature.express-delivery:false}")
    private boolean expressDeliveryEnabled;
}

// 2. Update the value in Git repo
// 3. Call this endpoint — NO restart needed:
//    POST http://order-service:8081/actuator/refresh
// 4. @RefreshScope beans are recreated with new values ✅

# Enable in application.yml:
management:
  endpoints:
    web:
      exposure:
        include: refresh, health, info, metrics
config-repo/
├── application.yml                # shared by ALL services
├── order-service/
│   ├── order-service.yml          # order service defaults
│   ├── order-service-dev.yml      # dev profile overrides
│   └── order-service-prod.yml     # prod profile overrides
├── payment-service/
│   ├── payment-service.yml
│   └── payment-service-prod.yml
└── inventory-service/
    └── inventory-service.yml

# Profile precedence (highest to lowest):
# order-service-prod.yml → order-service.yml → application.yml
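The precedence chain in that comment is just map layering: apply the shared file first, then the service file, then the profile file, with later layers winning. A hypothetical sketch, not how Spring's Environment is actually implemented:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

/** Toy config resolver: later layers override earlier ones,
 *  mirroring application.yml < order-service.yml < order-service-prod.yml. */
public class LayeredConfig {
    private final Map<String, String> merged = new LinkedHashMap<>();

    /** Layers are applied lowest-precedence first; putAll lets later layers win. */
    public LayeredConfig(List<Map<String, String>> layersLowestFirst) {
        for (Map<String, String> layer : layersLowestFirst) merged.putAll(layer);
    }

    public String get(String key, String defaultValue) {
        return merged.getOrDefault(key, defaultValue);
    }
}
```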
Config is matched by spring.application.name. @RefreshScope + POST /actuator/refresh = update the law live, no demolishing and rebuilding the building.

@Service
public class OrderService {

    // Stack order: CB wraps Retry wraps TimeLimiter
    @CircuitBreaker(name = "paymentService", fallbackMethod = "paymentFallback")
    @Retry(name = "paymentService")         // retry 3 times before CB counts failure
    @TimeLimiter(name = "paymentService")   // timeout after 3s
    public CompletableFuture<String> processPayment(Order order) {
        return CompletableFuture.supplyAsync(() ->
            paymentClient.charge(order.getId(), order.getAmount()));
    }

    // ALWAYS provide a fallback. Never fail silently.
    // Method signature must match + add Exception parameter
    public CompletableFuture<String> paymentFallback(Order order, Exception e) {
        log.error("Payment failed, queuing for retry: {}", e.getMessage());
        paymentQueue.enqueue(order);   // save to retry later
        return CompletableFuture.completedFuture(
            "Order confirmed. Payment will be processed shortly.");
    }
}
resilience4j:
circuitbreaker:
instances:
paymentService:
sliding-window-size: 10 # evaluate last 10 calls
failure-rate-threshold: 50 # open if ≥50% fail
wait-duration-in-open-state: 10s # stay open 10 seconds
permitted-number-of-calls-in-half-open-state: 3
slow-call-rate-threshold: 80 # also open if 80% are slow
slow-call-duration-threshold: 2s
retry:
instances:
paymentService:
max-attempts: 3
wait-duration: 500ms
retry-exceptions:
- java.io.IOException
- java.util.concurrent.TimeoutException
timelimiter:
instances:
paymentService:
timeout-duration: 3s # fail fast after 3 seconds
        cancel-running-future: true

// BULKHEAD ANALOGY: Ship compartments — if one floods, ship doesn't sink
// Without bulkhead: slow Payment hogs ALL threads → Inventory calls also fail
// With bulkhead: Payment has its OWN pool → Inventory pool unaffected
@Bulkhead(name = "paymentService", type = Bulkhead.Type.THREADPOOL)
@CircuitBreaker(name = "paymentService", fallbackMethod = "paymentFallback")
public CompletableFuture<String> processPayment(Order order) {
    return CompletableFuture.supplyAsync(() -> paymentClient.charge(order));
}

# application.yml
resilience4j:
  thread-pool-bulkhead:
    instances:
      paymentService:
        max-thread-pool-size: 10    # Payment gets max 10 threads
        core-thread-pool-size: 5
        queue-capacity: 20
      inventoryService:
        max-thread-pool-size: 15    # Inventory gets its own pool
        core-thread-pool-size: 8
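The CLOSED → OPEN → HALF-OPEN machine those settings configure can be sketched in plain Java. This is a toy, single-probe version, not Resilience4j's internals; `ToyCircuitBreaker` and its method names are invented:

```java
/** Toy circuit breaker: CLOSED -> OPEN when the failure rate over a window
 *  crosses a threshold; OPEN -> HALF_OPEN after a wait; HALF_OPEN -> CLOSED
 *  on a successful probe call (or back to OPEN if the probe fails). */
public class ToyCircuitBreaker {
    public enum State { CLOSED, OPEN, HALF_OPEN }

    private State state = State.CLOSED;
    private int failures = 0, calls = 0;
    private long openedAt = 0;
    private final int windowSize;           // like sliding-window-size: 10
    private final double failureThreshold;  // like failure-rate-threshold: 50 (%)
    private final long waitMillis;          // like wait-duration-in-open-state: 10s

    public ToyCircuitBreaker(int windowSize, double failureThresholdPct, long waitMillis) {
        this.windowSize = windowSize;
        this.failureThreshold = failureThresholdPct;
        this.waitMillis = waitMillis;
    }

    /** Gatekeeper: OPEN rejects calls until the wait elapses, then allows one probe. */
    public boolean allowCall(long nowMillis) {
        if (state == State.OPEN && nowMillis - openedAt >= waitMillis) {
            state = State.HALF_OPEN;
        }
        return state != State.OPEN;
    }

    /** Record the outcome of a permitted call. */
    public void record(boolean success, long nowMillis) {
        if (state == State.HALF_OPEN) {
            if (success) { state = State.CLOSED; failures = 0; calls = 0; }
            else { state = State.OPEN; openedAt = nowMillis; }
            return;
        }
        calls++;
        if (!success) failures++;
        if (calls >= windowSize && 100.0 * failures / calls >= failureThreshold) {
            state = State.OPEN;
            openedAt = nowMillis;
        }
    }

    public State state() { return state; }
}
```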
@Service
public class OrderService {

    @Autowired
    private KafkaTemplate<String, OrderEvent> kafkaTemplate;

    public Order placeOrder(OrderRequest request) {
        Order order = orderRepository.save(buildOrder(request));

        // Publish event — returns IMMEDIATELY. Doesn't wait for anyone.
        OrderEvent event = new OrderEvent(order.getId(), order.getAmount(),
                order.getSkuCode(), order.getUserId());
        kafkaTemplate.send("order-placed-topic", event);

        // Order Service is DONE. It doesn't know or care that:
        // - Inventory Service will reserve the stock
        // - Notification Service will send confirmation email
        // - Analytics Service will update dashboards
        // They ALL get this event independently. Loose coupling!
        return order;
    }
}
@Service
public class NotificationService {

    @KafkaListener(topics = "order-placed-topic", groupId = "notification-group")
    public void handleOrderPlaced(OrderEvent event) {
        // ⚠️ CRITICAL: Kafka guarantees at-least-once delivery.
        // The SAME message may arrive TWICE (e.g., after a consumer crash).
        // Your consumer MUST be idempotent!
        if (processedEventRepo.exists(event.getOrderId())) {
            log.info("Duplicate event, skipping: {}", event.getOrderId());
            return;   // already processed — idempotent check!
        }
        emailService.sendConfirmation(event.getUserId(), event.getOrderId());
        processedEventRepo.markProcessed(event.getOrderId());
    }
}

// InventoryService also consumes the same event, independently:
@KafkaListener(topics = "order-placed-topic", groupId = "inventory-group")
public void reserveStock(OrderEvent event) {
    inventoryService.reserve(event.getSkuCode(), event.getQuantity());
}
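The idempotency check in that listener can be isolated into a tiny plain-Java helper. A sketch only: `IdempotentHandler` is a hypothetical name, and a real system would persist the processed-id set in a database, not keep it in memory:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

/** Toy idempotent handler: runs the side effect for each event id exactly once,
 *  even when at-least-once delivery redelivers the same event. */
public class IdempotentHandler {
    private final Set<String> processed = ConcurrentHashMap.newKeySet();

    /** Returns true if the event was processed, false if it was a duplicate. */
    public boolean handle(String eventId, Runnable sideEffect) {
        if (!processed.add(eventId)) return false;   // add() is atomic: duplicates lose
        sideEffect.run();
        return true;
    }
}
```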
// RabbitMQ: better for task queues, complex routing
// Kafka: better for event streaming, high throughput, replay

// Producer
@Autowired
private RabbitTemplate rabbitTemplate;

rabbitTemplate.convertAndSend("order.exchange", "order.placed", event);

// Consumer
@RabbitListener(queues = "order.notification.queue")
public void handleOrder(OrderEvent event) {
    emailService.send(event);
}

// Config
@Bean
public Queue orderQueue() { return new Queue("order.notification.queue"); }

@Bean
public TopicExchange exchange() { return new TopicExchange("order.exchange"); }

@Bean
public Binding binding(Queue q, TopicExchange e) {
    return BindingBuilder.bind(q).to(e).with("order.#");
}
spring:
kafka:
bootstrap-servers: localhost:9092
producer:
key-serializer: org.apache.kafka.common.serialization.StringSerializer
value-serializer: org.springframework.kafka.support.serializer.JsonSerializer
acks: all # wait for all replicas before success
retries: 3
consumer:
group-id: notification-group
auto-offset-reset: earliest
key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
value-deserializer: org.springframework.kafka.support.serializer.JsonDeserializer
properties:
spring.json.trusted.packages: "*"
        max.poll.records: 50

Producer drops the letter in the box (kafkaTemplate.send). Consumer picks it up when ready (@KafkaListener). Idempotent consumer = processing the same letter twice gives the same result. At-least-once delivery = design for duplicates. Different groupId = different consumer groups EACH get their own copy.

These patterns separate junior from senior in every interview. Learn the WHY first.
Problem: In microservices, you can't do BEGIN TRANSACTION across 3 different databases. If Order DB commits but Payment DB fails — you have a corrupt state.
// Step 1: Order Service creates order and publishes event
kafkaTemplate.send("order-created", new OrderCreatedEvent(order.getId()));

// Step 2: Payment Service listens and processes
@KafkaListener(topics = "order-created")
public void onOrderCreated(OrderCreatedEvent e) {
    try {
        paymentService.charge(e.getOrderId());
        kafkaTemplate.send("payment-completed", e);   // success
    } catch (Exception ex) {
        kafkaTemplate.send("payment-failed", e);      // compensate!
    }
}

// Step 3: Order Service listens to payment-failed → compensate
@KafkaListener(topics = "payment-failed")
public void onPaymentFailed(OrderCreatedEvent e) {
    orderService.cancel(e.getOrderId());   // COMPENSATING TRANSACTION
    kafkaTemplate.send("order-cancelled", e);
}
// Orchestrator knows the full flow. Easier to visualise and debug.
@Service
public class OrderSagaOrchestrator {

    public void startOrderSaga(Order order) {
        try {
            // Step 1
            inventoryClient.reserve(order.getSkuCode(), order.getQty());
            try {
                // Step 2
                paymentClient.charge(order.getId(), order.getAmount());
                try {
                    // Step 3
                    notificationClient.sendConfirmation(order.getUserId());
                    orderService.markCompleted(order.getId());
                } catch (Exception e3) {
                    // Step 3 failed — compensate steps 2 and 1, in reverse order
                    paymentClient.refund(order.getId());
                    inventoryClient.release(order.getSkuCode());
                    orderService.markFailed(order.getId());
                }
            } catch (Exception e2) {
                // Step 2 failed — compensate step 1
                inventoryClient.release(order.getSkuCode());
                orderService.markFailed(order.getId());
            }
        } catch (Exception e1) {
            // Step 1 failed — nothing to compensate yet
            orderService.markFailed(order.getId());
        }
    }
}
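The compensate-in-reverse-order logic generalises nicely with a stack of completed steps, which avoids nesting a try/catch per step. A hypothetical sketch, not Axon or any real saga framework:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

/** Toy saga orchestrator: run steps in order; on failure,
 *  run the compensations of completed steps in reverse. */
public class ToySaga {
    public record Step(String name, Runnable action, Runnable compensation) {}

    private final List<String> log = new ArrayList<>();

    public boolean run(List<Step> steps) {
        Deque<Step> completed = new ArrayDeque<>();
        for (Step step : steps) {
            try {
                step.action().run();
                log.add("done:" + step.name());
                completed.push(step);
            } catch (RuntimeException e) {
                log.add("failed:" + step.name());
                while (!completed.isEmpty()) {       // compensate in reverse order
                    Step undo = completed.pop();
                    undo.compensation().run();
                    log.add("compensated:" + undo.name());
                }
                return false;
            }
        }
        return true;
    }

    public List<String> log() { return log; }
}
```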
Problem: Your order history dashboard needs 5 JOINs across different tables. These complex reads are killing your write performance. CQRS: separate the write model (normalised, optimised for consistency) from the read model (denormalised, optimised for performance, possibly in Elasticsearch).
// COMMAND side — writes to MySQL (strong consistency)
@RestController
public class OrderCommandController {

    @PostMapping("/orders")
    public ResponseEntity<?> placeOrder(@RequestBody CreateOrderCommand cmd) {
        orderCommandService.handle(cmd);            // writes to SQL DB
        kafkaTemplate.send("order-events", cmd);    // publish for read model sync
        return ResponseEntity.accepted().build();
    }
}

// QUERY side — reads from Elasticsearch (fast, denormalised)
@RestController
public class OrderQueryController {

    @GetMapping("/orders/{userId}/history")
    public List<OrderSummary> getHistory(@PathVariable String userId) {
        return orderQueryService.findByUser(userId);   // reads Elasticsearch
    }
}
Instead of storing current state (balance: ₹300), you store every event (AccountCreated, Deposited-500, Withdrew-200). To get current state → replay all events. Benefits: full audit trail, time-travel debugging, natural fit with CQRS. Works perfectly with Axon Framework in Spring Boot.
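The replay idea fits in a few lines of plain Java (toy types, invented for illustration; real systems use an event store and aggregates, e.g. via Axon):

```java
import java.util.List;

/** Toy event-sourced account: current balance = replay of all events. */
public class EventSourcedAccount {
    public sealed interface Event permits Deposited, Withdrew {}
    public record Deposited(long amount) implements Event {}
    public record Withdrew(long amount) implements Event {}

    /** Fold the event log into current state; there is no "balance" column anywhere. */
    public static long replay(List<Event> events) {
        long balance = 0;
        for (Event e : events) {
            if (e instanceof Deposited d) balance += d.amount();
            else if (e instanceof Withdrew w) balance -= w.amount();
        }
        return balance;
    }

    /** Time travel: state as of the first n events. */
    public static long replayUpTo(List<Event> events, int n) {
        return replay(events.subList(0, n));
    }
}
```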
@Component
public class JwtAuthFilter extends OncePerRequestFilter {

    @Override
    protected void doFilterInternal(HttpServletRequest req, HttpServletResponse res,
                                    FilterChain filterChain) throws ServletException, IOException {
        String header = req.getHeader("Authorization");
        if (header != null && header.startsWith("Bearer ")) {
            String token = header.substring(7);
            if (jwtUtil.validateToken(token)) {
                String userId = jwtUtil.extractUserId(token);
                List<GrantedAuthority> roles = jwtUtil.extractRoles(token);

                // ⚠️ CRITICAL: NEVER trust userId from request body!
                // Always extract from the VALIDATED JWT claims.
                // A malicious caller can put any userId in the body.
                var auth = new UsernamePasswordAuthenticationToken(userId, null, roles);
                SecurityContextHolder.getContext().setAuthentication(auth);
            }
        }
        filterChain.doFilter(req, res);
    }
}
@Configuration
public class SecurityConfig {

    @Bean
    SecurityFilterChain filterChain(HttpSecurity http) throws Exception {
        http
            .csrf(csrf -> csrf.disable())
            .sessionManagement(s -> s.sessionCreationPolicy(
                SessionCreationPolicy.STATELESS))   // No sessions with JWT!
            .authorizeHttpRequests(auth -> auth
                .requestMatchers("/api/public/**").permitAll()
                .requestMatchers("/api/admin/**").hasRole("ADMIN")
                .anyRequest().authenticated())
            .oauth2ResourceServer(oauth2 -> oauth2.jwt(jwt ->
                jwt.jwtAuthenticationConverter(jwtAuthenticationConverter())));
        return http.build();
    }
}

# application.yml — points to Keycloak / your Auth Server JWKS
spring:
  security:
    oauth2:
      resourceserver:
        jwt:
          jwk-set-uri: http://keycloak:8080/realms/myapp/protocol/openid-connect/certs
// Service-to-service: use OAuth2 Client Credentials grant
// No user. Service authenticates AS ITSELF to get a token.
// Spring Security OAuth2 Client handles this automatically with WebClient
@Bean
public WebClient paymentWebClient(OAuth2AuthorizedClientManager clientManager) {
    ServletOAuth2AuthorizedClientExchangeFilterFunction oauth2 =
        new ServletOAuth2AuthorizedClientExchangeFilterFunction(clientManager);
    oauth2.setDefaultClientRegistrationId("payment-service");
    return WebClient.builder()
        .baseUrl("http://payment-service")
        .apply(oauth2.oauth2Configuration())
        .build();
}

# application.yml — register as OAuth2 client
spring:
  security:
    oauth2:
      client:
        registration:
          payment-service:
            authorization-grant-type: client_credentials
            client-id: order-service
            client-secret: ${ORDER_SERVICE_SECRET}
        provider:
          payment-service:
            token-uri: http://keycloak:8080/realms/myapp/protocol/openid-connect/token
You can't manage what you can't see. In a distributed system, observability is not optional — it's how you sleep at night.
What happened? Structured JSON logs with Trace IDs. Stack: ELK (Elasticsearch + Logstash + Kibana) or Grafana Loki. Search all service logs from one UI.
How is it performing? Micrometer → Prometheus → Grafana dashboards. CPU, heap, request rate, error rate, circuit breaker state, custom counters.
Why is it slow? Zipkin / Jaeger. One Trace ID follows a request across all services. Visualise: Gateway (200ms) → Order (50ms) → Payment (120ms). Instant bottleneck detection.
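A toy illustration of why one Trace ID makes bottleneck hunting trivial: filter spans by traceId, take the slowest. All types here are invented for the sketch, not Zipkin's or Jaeger's API, and the durations are each service's own time:

```java
import java.util.Comparator;
import java.util.List;

/** Toy trace analysis: spans sharing one traceId belong to one request;
 *  the span with the longest own-duration is the bottleneck. */
public class TraceAnalyzer {
    public record Span(String traceId, String service, long durationMillis) {}

    public static String bottleneck(List<Span> spans, String traceId) {
        return spans.stream()
                .filter(s -> s.traceId().equals(traceId))   // follow ONE request
                .max(Comparator.comparingLong(Span::durationMillis))
                .map(Span::service)
                .orElse("unknown");
    }
}
```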
management:
endpoints:
web:
exposure:
include: health, info, metrics, prometheus, circuitbreakers
metrics:
tags:
application: ${spring.application.name} # tag all metrics with service name
tracing:
sampling:
probability: 1.0 # sample 100% in dev (0.1 in prod)
# Zipkin endpoint
management:
zipkin:
tracing:
endpoint: http://zipkin:9411/api/v2/spans
# Structured JSON logging for ELK
logging:
pattern:
    console: "%d{yyyy-MM-dd HH:mm:ss} [%X{traceId}] %-5level %logger{36} - %msg%n"

One walk through your city and you'll never forget the stack. Guaranteed.
The city Signboard. Find any building by name, not address. @EnableEurekaServer. @FeignClient(name="svc").
Airport Security. ONE entrance. Auth + Rate Limit + Route. lb://service-name = load-balanced via Eureka.
City Hall. All laws in one Git repo. @RefreshScope + POST /actuator/refresh = update without restart.
Electrical fuse + Ship compartments. CLOSED → OPEN → HALF-OPEN. Resilience4J, not the now-deprecated Hystrix.
Post Office. At-least-once delivery. Idempotent consumers. Different groupId = each consumer gets own copy.
CCTV network with timestamps. One TraceId follows the request across all services.
| Component | Tool | The Analogy |
|---|---|---|
| Service Registry | Netflix Eureka | Yellow Pages directory |
| API Gateway | Spring Cloud Gateway | Airport security + routing |
| Config Management | Spring Cloud Config + Git | City Hall (laws change once) |
| Circuit Breaker | Resilience4J | Electrical fuse |
| Bulkhead | Resilience4J ThreadPool | Ship compartments |
| Declarative HTTP | OpenFeign | Looks like a local method call |
| Async Messaging | Apache Kafka | Post office (fire and forget) |
| Auth Token | JWT + Keycloak | Hotel key card + passport |
| Distributed Tracing | Zipkin + Micrometer Tracing (formerly Sleuth) | Parcel tracking number |
| Metrics | Micrometer + Prometheus + Grafana | City health dashboard |
| Distributed Transactions | Saga Pattern (Axon) | Wedding booking chain |
| Read/Write Separation | CQRS | ER vs Diagnostic Lab |
The exact questions in senior Java interviews, with the depth of answer interviewers actually want. Click to reveal model answers.