Performance Testing

Transform your Karate functional tests into Gatling-powered performance tests that validate API correctness under load - no separate scripts needed.

Prerequisites

Before starting performance testing with Karate:

  • Working Karate functional tests
  • Java 11 or higher
  • Maven or Gradle build system
  • Basic understanding of load testing concepts (RPS, response time, percentiles)
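If percentiles are new to you, a quick worked example (plain JavaScript, nothing Karate-specific) shows why load-test SLAs are stated as p95/p99 rather than averages:

```javascript
// Given per-request response times (ms) collected over a 10-second window,
// compute mean, p95 (nearest-rank method), and requests per second (RPS).
function percentile(sortedMs, p) {
  // nearest-rank: the value at or above the p-th percentile position
  const idx = Math.ceil((p / 100) * sortedMs.length) - 1;
  return sortedMs[Math.max(0, idx)];
}

const responseTimesMs = [120, 95, 110, 300, 105, 98, 2500, 115, 102, 130];
const sorted = [...responseTimesMs].sort((a, b) => a - b);

const mean = sorted.reduce((sum, t) => sum + t, 0) / sorted.length;
const p95 = percentile(sorted, 95);
const windowSeconds = 10;
const rps = responseTimesMs.length / windowSeconds;

console.log(`mean=${mean} ms, p95=${p95} ms, rps=${rps}`);
```

Note how a single 2500 ms outlier drags the mean to 367.5 ms even though most requests finished near 100 ms; percentiles expose the tail that averages hide.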

Why Karate + Gatling?

Unique Advantages

  • Reuse functional tests - No need to rewrite scenarios for performance testing
  • Deep assertions under load - Validate response correctness, not just response codes
  • Single framework - API functional, performance, and UI testing in one tool
  • Maintainability - One set of tests for multiple purposes
  • Realistic scenarios - Use actual business workflows for load testing

Traditional vs Karate Approach

Aspect | Traditional Performance Testing | Karate + Gatling
------ | ------------------------------- | ----------------
Test Creation | Separate load test scripts | Reuse functional tests
Validation | Basic status code checks | Full response validation
Maintenance | Two sets of tests to maintain | Single test suite
Realistic Scenarios | Often simplified for performance | Real business workflows
Learning Curve | Learn Gatling DSL + API testing | Learn Karate (covers both)

Key Benefits

  • Reuse 100% of functional tests for performance validation
  • Deep assertions verify correctness under load, not just response codes
  • Single codebase eliminates duplicate test maintenance
  • Realistic scenarios use actual business workflows

Quick Start

Maven Dependencies

Add Gatling support to your existing Karate project:

<properties>
<gatling.version>3.9.5</gatling.version>
<gatling-maven-plugin.version>4.3.7</gatling-maven-plugin.version>
<scala-maven-plugin.version>4.8.1</scala-maven-plugin.version>
</properties>

<dependencies>
<!-- Existing Karate dependency -->
<dependency>
<groupId>io.karatelabs</groupId>
<artifactId>karate-junit5</artifactId>
<version>1.5.1</version>
<scope>test</scope>
</dependency>

<!-- Gatling integration -->
<dependency>
<groupId>io.karatelabs</groupId>
<artifactId>karate-gatling</artifactId>
<version>1.5.1</version>
<scope>test</scope>
</dependency>

<!-- Gatling dependencies -->
<dependency>
<groupId>io.gatling.highcharts</groupId>
<artifactId>gatling-charts-highcharts</artifactId>
<version>${gatling.version}</version>
<scope>test</scope>
</dependency>
</dependencies>

<build>
<plugins>
<!-- Scala compilation for Gatling -->
<plugin>
<groupId>net.alchim31.maven</groupId>
<artifactId>scala-maven-plugin</artifactId>
<version>${scala-maven-plugin.version}</version>
<executions>
<execution>
<goals>
<goal>testCompile</goal>
</goals>
</execution>
</executions>
</plugin>

<!-- Gatling Maven plugin -->
<plugin>
<groupId>io.gatling</groupId>
<artifactId>gatling-maven-plugin</artifactId>
<version>${gatling-maven-plugin.version}</version>
<configuration>
<simulationClass>examples.LoadTest</simulationClass>
</configuration>
</plugin>
</plugins>
</build>

Gradle Configuration

plugins {
id 'scala'
id 'io.gatling.gradle' version '3.9.5.6'
}

dependencies {
// Existing Karate
testImplementation 'io.karatelabs:karate-junit5:1.5.1'

// Gatling integration
testImplementation 'io.karatelabs:karate-gatling:1.5.1'

// Gatling dependencies
gatling 'io.gatling.highcharts:gatling-charts-highcharts:3.9.5'
gatling 'io.gatling:gatling-app:3.9.5'
}

gatling {
simulations = {
include 'examples/LoadTest.scala'
}
}

Scala Requirement

Gatling simulations are written in Scala, but the Maven/Gradle plugins handle compilation automatically. You don't need to know Scala - the examples below are self-explanatory.

Development Workflow

Start by writing and validating functional tests first. Once your functional tests are stable and passing, convert them to performance tests. This ensures you're load testing correct behavior, not bugs.

Basic Performance Test

Your First Load Test

Create a simple Scala simulation that runs your Karate features under load:

// src/test/scala/examples/LoadTest.scala
package examples

import com.intuit.karate.gatling.PreDef._
import io.gatling.core.Predef._
import scala.concurrent.duration._

class LoadTest extends Simulation {

val protocol = karateProtocol()

// Define scenario using your Karate feature file
val loadTest = scenario("User API Load Test")
.exec(karateFeature("classpath:features/users/user-load-test.feature"))

setUp(
loadTest.inject(
rampUsers(100) during (30.seconds), // Ramp up 100 users over 30 seconds
constantUsersPerSec(10) during (60.seconds) // Then 10 users/sec for 60 seconds
).protocols(protocol)
).assertions(
global.responseTime.max.lt(2000), // Max response time < 2s
global.responseTime.percentile3.lt(1000), // 95th percentile < 1s
global.successfulRequests.percent.gt(95) // 95% success rate
)
}

Karate Feature for Performance Testing

Feature: User API Performance Test

Background:
* url baseUrl
* def authToken = karate.callSingle('classpath:auth/get-token.feature').token
* header Authorization = 'Bearer ' + authToken

Scenario: Get user profile
* def userId = Math.floor(Math.random() * 1000) + 1
Given path 'users', userId
When method get
Then status 200
And match response == { id: '#number', name: '#string', email: '#string' }
And assert responseTime < 500

Scenario: Search users
* def searchTerm = ['john', 'jane', 'alice', 'bob'][Math.floor(Math.random() * 4)]
Given path 'users/search'
And param q = searchTerm
And param limit = 10
When method get
Then status 200
And match response.users == '#[] #object'
And assert response.users.length <= 10
And assert responseTime < 1000

Advanced Load Testing Patterns

Multi-Scenario Load Test

Simulate realistic user distributions with multiple concurrent scenarios:

class AdvancedLoadTest extends Simulation {

val protocol = karateProtocol()

// Define different user behaviors
val browsingUsers = scenario("Browsing Users")
.exec(karateFeature("classpath:perf/browsing-flow.feature"))

val purchaseUsers = scenario("Purchase Users")
.exec(karateFeature("classpath:perf/purchase-flow.feature"))

val adminUsers = scenario("Admin Users")
.exec(karateFeature("classpath:perf/admin-flow.feature"))

setUp(
// Realistic user distribution: 70% browsing, 20% purchasing, 10% admin
browsingUsers.inject(
constantUsersPerSec(7) during (300.seconds) // 7 users/sec for 5 minutes
),
purchaseUsers.inject(
constantUsersPerSec(2) during (300.seconds) // 2 users/sec for 5 minutes
),
adminUsers.inject(
constantUsersPerSec(1) during (300.seconds) // 1 user/sec for 5 minutes
)
).protocols(protocol)
.assertions(
global.responseTime.percentile3.lt(2000), // 95th percentile < 2s
global.successfulRequests.percent.gt(99), // 99% success rate
forAll.responseTime.max.lt(5000) // Max response < 5s for all scenarios
)
}

Real User Journey Testing

Simple User Flow

Start with a basic shopping journey:

Feature: Simple shopping flow

Background:
* url baseUrl
* def authToken = karate.callSingle('classpath:auth/get-token.feature').token
* header Authorization = 'Bearer ' + authToken

Scenario: Browse and purchase
# Browse products
Given path 'products'
And param category = 'electronics'
When method get
Then status 200
And match response.products == '#[] #object'

# Purchase first product
* def product = response.products[0]
Given path 'orders'
And request { productId: '#(product.id)', quantity: 1 }
When method post
Then status 201
And match response.orderId == '#string'

Complete User Journey with Think Times

For realistic load testing, add user pauses between actions:

Scenario: Complete shopping journey
# Login
Given path 'auth/login'
And request { username: 'user@example.com', password: 'secret' }
When method post
Then status 200
* def authToken = response.token
* header Authorization = 'Bearer ' + authToken

# Browse products (1 second think time)
* karate.pause(1000)
Given path 'products'
And param category = 'electronics'
When method get
Then status 200
* def randomProduct = response.products[Math.floor(Math.random() * response.products.length)]

# View product details (2 seconds think time)
* karate.pause(2000)
Given path 'products', randomProduct.id
When method get
Then status 200

# Add to cart (1 second think time)
* karate.pause(1000)
Given path 'cart/items'
And request { productId: '#(randomProduct.id)', quantity: 1 }
When method post
Then status 201

# Checkout (3 seconds thinking about purchase)
* karate.pause(3000)
Given path 'orders'
And request { items: [{ productId: '#(randomProduct.id)', quantity: 1 }] }
When method post
Then status 201
And match response == { orderId: '#string', status: 'pending', total: '#number' }
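Fixed pauses like the ones above make every virtual user wait exactly the same time, which can synchronize requests into artificial waves. A small helper you could define in karate-config.js randomizes think times instead (the name `randomThinkTime` is ours, not a Karate built-in):

```javascript
// Hypothetical helper: returns a uniformly random integer think time in
// [minMs, maxMs] so virtual users don't all fire at the same instant.
function randomThinkTime(minMs, maxMs) {
  return minMs + Math.floor(Math.random() * (maxMs - minMs + 1));
}

// In a feature you might then write:
//   * karate.pause(randomThinkTime(1000, 3000))
console.log(randomThinkTime(1000, 3000));
```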

Performance Test Execution

Running Performance Tests

# Compile and run performance tests
mvn clean test-compile gatling:test

# Run specific simulation
mvn gatling:test -Dgatling.simulationClass=examples.LoadTest

# Run with environment
mvn gatling:test -Dkarate.env=perf -Dgatling.simulationClass=examples.LoadTest

# Run with custom properties
mvn gatling:test \
-Dkarate.env=perf \
-Dapi.baseUrl=https://load-test.example.com \
-Dtarget.rps=100 \
-Dtest.duration=300

# Gradle execution
./gradlew gatlingRun-examples.LoadTest

CI/CD Integration

Schedule performance tests in your CI/CD pipeline:

# GitHub Actions example
name: Performance Tests
on:
schedule:
- cron: '0 2 * * *' # Daily at 2 AM

jobs:
performance-test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/setup-java@v3
with:
java-version: '11'
- run: mvn clean test-compile gatling:test -Dkarate.env=perf
- uses: actions/upload-artifact@v3
with:
name: gatling-reports
path: target/gatling/

Advanced Performance Patterns

Custom User Simulation

class RealisticLoadTest extends Simulation {

val protocol = karateProtocol()

// Different user personas with different behaviors
val lightUsers = scenario("Light Users")
.exec(karateFeature("classpath:perf/light-usage.feature"))
.pause(10, 30) // 10-30 second pauses between requests

val heavyUsers = scenario("Heavy Users")
.exec(karateFeature("classpath:perf/heavy-usage.feature"))
.pause(2, 5) // 2-5 second pauses

val batchUsers = scenario("Batch API Users")
.exec(karateFeature("classpath:perf/batch-operations.feature"))
.pause(60, 120) // 1-2 minute pauses between batches

setUp(
// Realistic user distribution
lightUsers.inject(
rampUsers(50) during (60.seconds),
constantUsersPerSec(5) during (300.seconds)
),
heavyUsers.inject(
rampUsers(20) during (120.seconds),
constantUsersPerSec(2) during (300.seconds)
),
batchUsers.inject(
rampUsers(5) during (180.seconds),
constantUsersPerSec(0.5) during (300.seconds)
)
).protocols(protocol)
.assertions(
// SLA-based assertions
global.responseTime.percentile3.lt(2000), // 95th percentile < 2s
global.responseTime.percentile4.lt(5000), // 99th percentile < 5s
global.successfulRequests.percent.gt(99.5), // 99.5% success rate

// Per-scenario assertions
details("Light Users").responseTime.mean.lt(500),
details("Heavy Users").responseTime.mean.lt(1000),
details("Batch API Users").responseTime.mean.lt(10000)
)
}

Performance Test Data Management

For realistic load testing, generate test data upfront and reuse across scenarios:

@ignore
Feature: Generate performance test data

Scenario: Create test users
* def generateUsers = function(count) { /* generate user objects */ }
* def testUsers = generateUsers(1000)
* def results = call read('create-users-batch.feature') testUsers
# karate.write resolves relative paths against the build dir (target)
* karate.write(results.userIds, 'perf-user-ids.json')
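The `generateUsers` body above is elided; a minimal sketch of what it might look like, assuming the API only needs a name and an email per user (field names are illustrative, not a fixed schema):

```javascript
// Illustrative generator: builds `count` unique user payloads for seeding.
// Field names (name, email) are assumptions for this sketch.
function generateUsers(count) {
  const users = [];
  for (let i = 1; i <= count; i++) {
    users.push({
      name: 'perf-user-' + i,
      email: 'perf-user-' + i + '@example.com'
    });
  }
  return users;
}

console.log(generateUsers(3));
```

Making each email unique avoids duplicate-key failures when the batch-create feature runs the payloads in parallel.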

Use pre-generated data in performance tests:

Feature: User performance test

Background:
* url baseUrl
* def userIds = read('file:target/perf-user-ids.json')

Scenario: Random user operations
* def randomUserId = userIds[Math.floor(Math.random() * userIds.length)]
Given path 'users', randomUserId
When method get
Then status 200
And assert responseTime < 500

High-Performance Configuration

Optimizing for High RPS

For high requests-per-second (RPS) requirements, configure thread pools:

// karate-config.js for performance testing
function fn() {
var env = karate.env || 'dev';

var config = {
baseUrl: 'https://api.example.com',
};

if (env == 'perf') {
// Performance-specific configuration
config.baseUrl = 'https://load-test-api.example.com';

// Optimize for high throughput
karate.configure('connectTimeout', 5000);
karate.configure('readTimeout', 10000);
karate.configure('retry', { count: 1, interval: 1000 });

// Increase HTTP client thread pool for high RPS
karate.configure('httpClientClass', 'com.intuit.karate.http.ApacheHttpClient');
java.lang.System.setProperty('karate.http.client.pool.max', '1000');
java.lang.System.setProperty('karate.http.client.pool.route.max', '100');
}

return config;
}

Scala Simulation with Custom Settings

class HighThroughputTest extends Simulation {

val protocol = karateProtocol(
// Group templated URLs into patterns, with optional pauses per HTTP method (ms)
"/api/users/{id}" -> pauseFor("get" -> 1000),
"/api/products" -> pauseFor("get" -> 2000, "post" -> 3000),
"/api/orders" -> pauseFor("post" -> 5000)
)

val userScenario = scenario("High Throughput Users")
.feed(Iterator.continually(Map(
"userId" -> (scala.util.Random.nextInt(10000) + 1) // available in the feature as __gatling.userId
)))
.exec(karateFeature("classpath:perf/high-throughput.feature"))

setUp(
userScenario.inject(
// Phase 1: Gradual ramp up to find baseline
rampUsers(10) during (30.seconds),
rampUsers(50) during (60.seconds),
rampUsers(100) during (120.seconds),

// Phase 2: Sustained load at target RPS
constantUsersPerSec(50) during (600.seconds),

// Phase 3: Peak load burst to test capacity
rampUsersPerSec(50) to (200) during (60.seconds),
constantUsersPerSec(200) during (120.seconds),
rampUsersPerSec(200) to (50) during (60.seconds)
).protocols(protocol)
).assertions(
// Performance SLA thresholds
global.responseTime.mean.lt(1000), // Average < 1s
global.responseTime.percentile3.lt(2000), // 95th percentile < 2s
global.responseTime.percentile4.lt(5000), // 99th percentile < 5s
global.responseTime.max.lt(10000), // Max < 10s
global.successfulRequests.percent.gt(99.5), // 99.5% success rate
global.requestsPerSec.gte(45) // Minimum 45 RPS sustained
).maxDuration(15.minutes)
}

Stress Testing

Breaking Point Analysis

class StressTest extends Simulation {

val protocol = karateProtocol()

val stressScenario = scenario("Stress Test")
.exec(karateFeature("classpath:perf/stress-test.feature"))

setUp(
stressScenario.inject(
// Find breaking point
incrementUsersPerSec(5)
.times(10)
.eachLevelLasting(60.seconds)
.separatedByRampsLasting(30.seconds)
.startingFrom(10)
).protocols(protocol)
).assertions(
// Allow some failures as we find breaking point
global.successfulRequests.percent.gt(90),
global.responseTime.percentile3.lt(5000)
)
}

Spike Testing

class SpikeTest extends Simulation {

val protocol = karateProtocol()

val normalLoad = scenario("Normal Load")
.exec(karateFeature("classpath:perf/normal-operations.feature"))

val spikeLoad = scenario("Spike Load")
.exec(karateFeature("classpath:perf/spike-operations.feature"))

setUp(
// Background normal load
normalLoad.inject(
constantUsersPerSec(10) during (600.seconds)
),

// Spike scenarios
spikeLoad.inject(
nothingFor(120.seconds),
rampUsers(200) during (30.seconds), // Sudden spike
constantUsersPerSec(100) during (60.seconds),
rampUsersPerSec(100) to (0) during (30.seconds)
)
).protocols(protocol)
.assertions(
// System should recover gracefully
global.responseTime.percentile3.lt(3000),
global.successfulRequests.percent.gt(95),
details("Normal Load").successfulRequests.percent.gt(99)
)
}

Performance Monitoring and Analysis

Custom Metrics in Karate Features

Feature: Performance test with custom metrics

Background:
* def performanceMetrics = {}
* configure afterScenario =
"""
function() {
var scenarioName = karate.info.scenarioName;
var responseTime = karate.get('responseTime');
var status = karate.get('responseStatus');

if (!performanceMetrics[scenarioName]) {
performanceMetrics[scenarioName] = {
count: 0,
totalTime: 0,
failures: 0
};
}

performanceMetrics[scenarioName].count++;
performanceMetrics[scenarioName].totalTime += responseTime;
if (status >= 400) {
performanceMetrics[scenarioName].failures++;
}
}
"""

Scenario: API endpoint performance
Given path 'api/heavy-computation'
And param complexity = 'medium'
When method post
Then status 200
And assert responseTime < 2000

# Custom performance validation
* if (responseTime > 1500) karate.log('SLOW RESPONSE:', responseTime, 'ms')
* if (responseTime < 100) karate.log('VERY FAST:', responseTime, 'ms')
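The afterScenario hook above accumulates a `{ count, totalTime, failures }` entry per scenario name. A small function (plain JavaScript; the name `summarize` is ours) turns that map into averages and failure rates for reporting:

```javascript
// Summarize the per-scenario metrics map built by the afterScenario hook:
// average response time and failure rate for each scenario name.
function summarize(metrics) {
  const summary = {};
  for (const name of Object.keys(metrics)) {
    const m = metrics[name];
    summary[name] = {
      avgResponseTime: m.totalTime / m.count,
      failureRate: m.failures / m.count
    };
  }
  return summary;
}

// Example: the shape the map takes after a run
const collected = { 'Get user profile': { count: 200, totalTime: 24000, failures: 2 } };
console.log(summarize(collected));
```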

Database Performance Testing

# perf/database-performance.feature
Feature: Database-intensive operations performance

Background:
* url baseUrl
* def authToken = karate.callSingle('classpath:auth/admin-token.feature').token
* header Authorization = 'Bearer ' + authToken

Scenario: Database query performance
# Complex query that hits database
Given path 'reports/analytics'
And param startDate = '2024-01-01'
And param endDate = '2024-12-31'
And param groupBy = 'month'
And param includeDetails = true
When method get
Then status 200

# Validate response structure under load
And match response ==
"""
{
summary: '#object',
data: '#[] #object',
metadata: {
queryTime: '#number',
recordCount: '#number',
fromCache: '#boolean'
}
}
"""

# Performance assertions (database queries can be slower than simple lookups)
And assert responseTime < 5000
And assert response.metadata.recordCount > 0

# Log performance for analysis
* def perfLog = { responseTime: '#(responseTime)', recordCount: '#(response.metadata.recordCount)', fromCache: '#(response.metadata.fromCache)', queryTime: '#(response.metadata.queryTime)' }
* karate.log('Query performance:', perfLog)

Report Analysis

Gatling Report Structure

After running performance tests, Gatling generates comprehensive reports:

target/gatling/
├── loadtest-20240115-143052/
│ ├── index.html # Main report
│ ├── global_stats.json # Performance statistics
│ ├── simulation.log # Raw execution data
│ └── js/ # Interactive charts
└── lastRun.txt # Latest run identifier
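The global_stats.json file can be parsed to gate a CI job programmatically. A hedged Node.js sketch; the field names (`meanResponseTime.total`, `percentiles3.total`) match recent Gatling report formats but should be verified against your own report output:

```javascript
// Check SLA thresholds against a parsed global_stats.json object.
// Field names below are assumptions based on common Gatling report output;
// verify them against the file your Gatling version generates.
function checkSla(stats) {
  const violations = [];
  if (stats.meanResponseTime.total > 1000) violations.push('mean response time > 1s');
  if (stats.percentiles3.total > 2000) violations.push('p95 response time > 2s');
  return violations;
}

// In CI you would load the real file first, e.g.:
// const stats = JSON.parse(require('fs').readFileSync(reportDir + '/js/global_stats.json', 'utf8'));
const sample = { meanResponseTime: { total: 450 }, percentiles3: { total: 2400 } };
console.log(checkSla(sample));
```

An empty array means all thresholds passed; a non-empty one can fail the build with a clear message instead of a raw assertion error.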

Custom Report Enhancement

// Custom reporter for additional metrics
class CustomReportTest extends Simulation {

val protocol = karateProtocol()

val testScenario = scenario("API Test")
.exec(karateFeature("classpath:perf/api-test.feature"))

setUp(
testScenario.inject(constantUsersPerSec(10) during (60.seconds))
).protocols(protocol)
.assertions(
global.responseTime.percentile3.lt(1000)
)

// Custom post-simulation processing
after {
println("=== CUSTOM PERFORMANCE ANALYSIS ===")

// Read simulation log for custom analysis
val logFile = new java.io.File("target/gatling/simulation.log")
if (logFile.exists()) {
// Process log file for custom metrics
// Add custom analysis logic here
}
}
}

Best Practices

1. Performance Test Design

# ✅ Good: Realistic user scenarios
Feature: Realistic e-commerce performance test

Scenario: Typical shopping journey
# Browse → Search → View → Add to Cart → Checkout
# With realistic think times between actions

# ✅ Good: Data-driven with realistic variations
* def products = ['laptop', 'phone', 'tablet', 'headphones']
* def categories = ['electronics', 'computers', 'mobile', 'accessories']
* def randomProduct = products[Math.floor(Math.random() * products.length)]

# ❌ Avoid: Unrealistic constant hammering
Feature: Unrealistic performance test
Scenario: Constant API calls
# Making 100 requests per second with no think time

2. Performance Assertions

# ✅ Good: Meaningful performance validations
Scenario: Performance with business validation
Given path 'orders'
When method get
Then status 200

# Performance assertions
And assert responseTime < 1000

# Business logic assertions under load
And match response.orders == '#[] #object'
And assert response.orders.length > 0
And match each response.orders ==
"""
{
id: '#string',
total: '#? _ > 0',
status: '#? ["pending", "confirmed", "shipped"].includes(_)'
}
"""

# Validate data quality under load
* def orderTotals = response.orders.map(o => o.total)
* def avgOrderValue = orderTotals.reduce((sum, total) => sum + total, 0) / orderTotals.length
* assert avgOrderValue > 0

3. Environment Configuration

// Performance environment configuration
function fn() {
var env = karate.env || 'dev';

var config = {};

if (env == 'perf') {
config.baseUrl = 'https://load-test.example.com';
config.authUrl = 'https://auth-load-test.example.com';

// Performance optimizations
karate.configure('connectTimeout', 3000);
karate.configure('readTimeout', 15000);
karate.configure('retry', { count: 2, interval: 500 });

// Reduce logging for performance
karate.configure('logPrettyRequest', false);
karate.configure('logPrettyResponse', false);

// Custom headers for load testing
karate.configure('headers', { 'X-Load-Test': 'true' });
}

return config;
}

Monitoring and Alerting

Performance Thresholds

class MonitoredLoadTest extends Simulation {

val protocol = karateProtocol()

val monitoredScenario = scenario("Monitored Load Test")
.exec(karateFeature("classpath:perf/monitored-test.feature"))

setUp(
monitoredScenario.inject(constantUsersPerSec(25) during (300.seconds))
).protocols(protocol)
.assertions(
// Critical thresholds - fail build if violated
global.responseTime.percentile3.lt(2000),
global.successfulRequests.percent.gt(99),

// Warning thresholds - log but don't fail
global.responseTime.mean.lt(500),
global.responseTime.percentile2.lt(1000)
)
}

Integration with Monitoring Systems

Extract and send metrics to your monitoring platform:

# Parse Gatling results and send to monitoring
REPORT_DIR=$(find target/gatling -type d -name "*$(date +%Y%m%d)*" | head -1)
MEAN_RT=$(jq '.meanResponseTime.total' "$REPORT_DIR/js/global_stats.json")
P95_RT=$(jq '.percentiles3.total' "$REPORT_DIR/js/global_stats.json")

# Send to your monitoring system (Datadog, New Relic, etc.)
curl -X POST https://monitoring.example.com/metrics \
-d "{\"meanResponseTime\": $MEAN_RT, \"p95ResponseTime\": $P95_RT}"

Troubleshooting Performance Tests

Common Performance Issues

Issue | Symptoms | Solution
----- | -------- | --------
Low RPS capability | Can't achieve target load | Increase HTTP client thread pool
Memory issues | OutOfMemoryError during tests | Increase heap size, optimize data usage
Connection timeouts | High failure rate | Adjust timeout settings
Inconsistent results | Highly variable response times | Use proper think times, check test environment

Debug Performance Issues

Feature: Debug slow endpoints

Background:
* configure logPrettyRequest = true

Scenario: Diagnose slow response
* def startTime = new Date().getTime()
Given url debugUrl
And path 'slow-endpoint'
When method get
Then status 200

* def totalTime = new Date().getTime() - startTime
* print 'Network:', responseTime, 'ms | Total:', totalTime, 'ms'
* if (responseTime > 2000) karate.log('SLOW:', responseBytes.length, 'bytes')

Next Steps

Master performance testing with Karate:

  1. Test Doubles - Create mock services for performance testing
  2. UI Testing - Add browser performance testing
  3. Advanced Configuration - Optimize for scale
  4. Parallel Execution - Maximize test performance

Ready to create mock services? Explore Test Doubles for API virtualization and contract testing.