Kotlin extensions for LangChain4j

I am excited to announce Kotlin extensions for LangChain4j!

It transforms LangChain4j's synchronous API into a modern, non-blocking Kotlin experience with coroutine support. It also fills in some missing LangChain4j features, such as advanced prompt-template management.

Key Features

  • ✨ Kotlin Coroutine support for ChatLanguageModels
  • 🌊 Kotlin Flow support for StreamingChatLanguageModels
  • 💄 External Prompt Templates with customizable sources
  • 💾 Non-Blocking Document Processing with Kotlin Coroutines

Non-Blocking Kotlin Coroutines API

Using Kotlin Coroutines provides a lot of benefits:

  • 🧵 Thread efficiency: Handle thousands of concurrent AI requests without thread exhaustion
  • 🔀 Easy cancellation: Leverage structured concurrency for reliable cleanup
  • 📈 Better scalability: Non-blocking operations improve resource utilization
  • 💻 Idiomatic Kotlin: Seamless integration with coroutine-based code
  • 🗿 Compatibility with legacy JVMs: Kotlin Coroutines have been around since Kotlin 1.3 (October 2018), and Kotlin 2.0, which this project uses, supports Java 11 and above.

Let’s explore the differences between the traditional LangChain4j API and the Kotlin extensions.

Chat Language Models

Traditional approach:

```kotlin
// Blocking call that ties up a thread
val response = model.chat(request) // blocks the calling thread
println(response.content())
```

With Kotlin Coroutines:

```kotlin
// Non-blocking coroutines with structured concurrency
launch {
    val response = model.chatAsync(request) // suspend function
    println(response.content())
}
```
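
Because the call is a suspend function, fanning out many concurrent requests composes naturally with structured concurrency. A minimal sketch of the pattern using plain kotlinx.coroutines, where `fakeChat` is a placeholder suspend call standing in for `model.chatAsync(request)` (not part of the library):

```kotlin
import kotlinx.coroutines.async
import kotlinx.coroutines.awaitAll
import kotlinx.coroutines.coroutineScope
import kotlinx.coroutines.delay
import kotlinx.coroutines.runBlocking

// Placeholder suspend call standing in for model.chatAsync(request)
suspend fun fakeChat(prompt: String): String {
    delay(10) // simulates network latency without blocking a thread
    return "echo: $prompt"
}

fun main() = runBlocking {
    // Fan out 100 concurrent requests; structured concurrency ensures
    // that a failure in any child cancels the rest.
    val replies = coroutineScope {
        (1..100).map { i -> async { fakeChat("question $i") } }.awaitAll()
    }
    println(replies.first()) // echo: question 1
}
```

All 100 calls run concurrently on a handful of threads, which is the thread-efficiency benefit listed above.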

Streaming Responses

The extension converts the StreamingChatLanguageModel response into a Kotlin asynchronous Flow:

```kotlin
val model: StreamingChatLanguageModel = OpenAiStreamingChatModel.builder()
    .apiKey("your-api-key")
    // more configuration parameters here ...
    .build()

model.generateFlow(messages).collect { reply ->
    when (reply) {
        is Completion ->
            println(
                "Final response: ${reply.response.content().text()}",
            )

        is Token -> println("Received token: ${reply.token}")
        else -> throw IllegalArgumentException("Unsupported event: $reply")
    }
}
```
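
Once the reply is a Flow, the usual Flow operators apply: for example, streamed tokens can be folded into the full reply text. A self-contained sketch with a hand-built token flow (`tokenFlow` is a stand-in, not the library API):

```kotlin
import kotlinx.coroutines.flow.Flow
import kotlinx.coroutines.flow.flowOf
import kotlinx.coroutines.flow.fold
import kotlinx.coroutines.runBlocking

// Stand-in for the token stream a StreamingChatLanguageModel would produce
fun tokenFlow(): Flow<String> = flowOf("Hello", ", ", "world", "!")

fun main() = runBlocking {
    // Accumulate streamed tokens into the final reply text
    val fullText = tokenFlow()
        .fold(StringBuilder()) { acc, token -> acc.append(token) }
        .toString()
    println(fullText) // Hello, world!
}
```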

Document Processing

Single Document

```kotlin
suspend fun loadDocument() {
    val source = FileSystemSource(Paths.get("path/to/document.txt"))
    val document = loadAsync(source, TextDocumentParser())
    println(document.text())
}
```

Parallel Processing

```kotlin
suspend fun loadDocuments() {
    try {
        // Get all files from the directory
        val paths = Files.walk(Paths.get("./data"))
            .filter(Files::isRegularFile)
            .toList()

        // Cap parallel file loads at 8; limitedParallelism returns a
        // dispatcher, not a scope
        val ioDispatcher = Dispatchers.IO.limitedParallelism(8)
        val documentParser = TextDocumentParser()

        // Process files in parallel; coroutineScope provides the
        // CoroutineScope that async requires
        val documents = coroutineScope {
            paths
                .map { path ->
                    async {
                        try {
                            loadAsync(
                                source = FileSystemSource(path),
                                parser = documentParser,
                                dispatcher = ioDispatcher,
                            )
                        } catch (e: Exception) {
                            logger.error("Failed to load document: $path", e)
                            null
                        }
                    }
                }
                .awaitAll()
                .filterNotNull()
        }

        // Process loaded documents
        documents.forEach { doc -> println(doc.text()) }
    } catch (e: Exception) {
        logger.error("Failed to process documents", e)
        throw e
    }
}
```
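
The `limitedParallelism(8)` call above caps how many loads run at once, independent of how many files there are. A small self-contained check of that behavior, with no LangChain4j involved (`maxObservedConcurrency` is an illustrative helper, not part of any library):

```kotlin
@file:OptIn(ExperimentalCoroutinesApi::class)

import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.ExperimentalCoroutinesApi
import kotlinx.coroutines.coroutineScope
import kotlinx.coroutines.launch
import kotlinx.coroutines.runBlocking
import java.util.concurrent.atomic.AtomicInteger

// Runs `jobs` blocking tasks on Dispatchers.IO capped at `parallelism`
// workers and reports the highest concurrency actually observed.
suspend fun maxObservedConcurrency(parallelism: Int, jobs: Int): Int {
    val dispatcher = Dispatchers.IO.limitedParallelism(parallelism)
    val active = AtomicInteger(0)
    val maxSeen = AtomicInteger(0)
    coroutineScope {
        repeat(jobs) {
            launch(dispatcher) {
                val now = active.incrementAndGet()
                maxSeen.updateAndGet { seen -> maxOf(seen, now) }
                Thread.sleep(20) // hold the worker thread briefly
                active.decrementAndGet()
            }
        }
    }
    return maxSeen.get()
}

fun main() = runBlocking {
    // With parallelism = 2, the observed concurrency never exceeds 2
    println(maxObservedConcurrency(parallelism = 2, jobs = 10))
}
```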

DocumentParser async API

```kotlin
suspend fun parseInputStream(input: InputStream) {
    input.use { stream -> // automatically closes the stream
        val document = TextDocumentParser().parseAsync(stream) // suspending function
        println(document.text())
    }
}
```

Prompt Templates

Customize AI interactions with flexible prompt templates.

Basic Usage

  1. Define templates in the classpath:

prompts/system.mustache:

```mustache
You are a helpful assistant using chatMemoryID={{chatMemoryID}}
```

prompts/user.mustache:

```mustache
Hello, {{userName}}! {{message}}
```

  2. Use templates with LangChain4j:

```kotlin
interface Assistant {
    @UserMessage("prompts/user.mustache")
    fun askQuestion(
        @UserName userName: String,
        @V("message") question: String
    ): String
}

val assistant = AiServices
    .builder(Assistant::class.java)
    .systemMessageProvider(
        TemplateSystemMessageProvider("prompts/system.mustache")
    )
    .chatLanguageModel(model)
    .build()

val response = assistant.askQuestion(
    userName = "Friend",
    question = "How are you?"
)
```

Customization

Configure via langchain4j-kotlin.properties:

```properties
# Custom template source
prompt.template.source=com.example.CustomTemplateSource

# Custom template renderer
prompt.template.renderer=com.example.CustomRenderer
```

You may provide your own PromptTemplateSource and TemplateRenderer implementations.

For more details, see the documentation.

Try it out: GitHub

Konstantin Pavlov

Software Engineer working with Java, Kotlin, Swift, and AI. Focusing on software architecture and building AI-infused apps. Passionate about testing and open-source projects.