Kotlin Extensions for LangChain4j
I am excited to announce Kotlin extensions for LangChain4j!
They transform LangChain4j's synchronous API into a modern, non-blocking Kotlin experience with coroutine support. They also fill some gaps in LangChain4j, such as advanced prompt template management.
Key Features
- Kotlin Coroutine support for ChatLanguageModels
- Kotlin Flow support for StreamingChatLanguageModels
- External Prompt Templates with customizable sources
- Non-Blocking Document Processing with Kotlin Coroutines
Non-Blocking Kotlin Coroutines API
Using Kotlin Coroutines provides many benefits:
- Thread efficiency: Handle thousands of concurrent AI requests without thread exhaustion
- Easy cancellation: Leverage structured concurrency for reliable cleanup
- Better scalability: Non-blocking operations improve resource utilization
- Idiomatic Kotlin: Seamless integration with coroutine-based code
- Compatibility with legacy JVMs: Kotlin Coroutines have been around for a while, since Kotlin 1.3 (October 2018). Kotlin 2.0, which is used by this project, supports Java 11 and above.
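The cancellation benefit can be illustrated with plain kotlinx.coroutines. In this sketch, slowModelCall is a hypothetical stand-in for any suspending model call:

```kotlin
import kotlinx.coroutines.delay
import kotlinx.coroutines.runBlocking
import kotlinx.coroutines.withTimeoutOrNull

// Hypothetical stand-in for any suspending model call.
suspend fun slowModelCall(): String {
    delay(10_000) // simulate a slow LLM response
    return "answer"
}

fun main() = runBlocking {
    // The call is cancelled cooperatively after 500 ms; no thread is blocked while waiting.
    val result = withTimeoutOrNull(500) { slowModelCall() }
    println(result) // null, because the call timed out and was cancelled
}
```

Because cancellation is cooperative and scoped, abandoning a slow request cleans up automatically instead of leaving a thread parked on a blocking call.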
Let’s explore the differences between the traditional LangChain4j API and the Kotlin extensions.
Chat Language Models
Traditional approach:
// Blocking call that ties up a thread
val response = model.chat(request) // blocks the calling thread
println(response.content())
With Kotlin Coroutines:
// Non-blocking coroutines with structured concurrency
launch {
    val response = model.chatAsync(request) // suspending call
    println(response.content())
}
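Because chatAsync suspends instead of blocking, fanning out many requests is cheap. A sketch using structured concurrency (buildRequest is a hypothetical helper that wraps a prompt string into a chat request):

```kotlin
import kotlinx.coroutines.async
import kotlinx.coroutines.awaitAll
import kotlinx.coroutines.coroutineScope

// Sketch: answer several prompts concurrently; if one fails,
// coroutineScope cancels the siblings and rethrows.
suspend fun answerAll(model: ChatLanguageModel, prompts: List<String>): List<String> =
    coroutineScope {
        prompts
            .map { prompt -> async { model.chatAsync(buildRequest(prompt)) } }
            .awaitAll()
            .map { response -> response.content().text() }
    }
```

All requests run concurrently on suspended coroutines rather than one thread each, so the same thread pool can serve far more in-flight calls.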
Streaming Responses
The extension converts StreamingChatLanguageModel responses into a Kotlin asynchronous Flow:
val model: StreamingChatLanguageModel = OpenAiStreamingChatModel.builder()
    .apiKey("your-api-key")
    // more configuration parameters here ...
    .build()

model.generateFlow(messages).collect { reply ->
    when (reply) {
        is Completion -> println(
            "Final response: ${reply.response.content().text()}",
        )
        is Token -> println("Received token: ${reply.token}")
        else -> throw IllegalArgumentException("Unsupported event: $reply")
    }
}
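Since the replies arrive as a Flow, the standard Flow operators compose naturally. A sketch that keeps only the Token events and joins them into the complete reply text (assuming the Token event shown above):

```kotlin
import kotlinx.coroutines.flow.filterIsInstance

// Sketch: collect only streamed tokens and concatenate them.
suspend fun streamToString(
    model: StreamingChatLanguageModel,
    messages: List<ChatMessage>,
): String {
    val text = StringBuilder()
    model.generateFlow(messages)
        .filterIsInstance<Token>()
        .collect { token -> text.append(token.token) }
    return text.toString()
}
```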
Document Processing
Single Document
suspend fun loadDocument() {
    val source = FileSystemSource(Paths.get("path/to/document.txt"))
    val document = loadAsync(source, TextDocumentParser())
    println(document.text())
}
Parallel Processing
suspend fun loadDocuments() {
    try {
        // Get all regular files from the directory, closing the walk stream afterwards
        val paths = Files.walk(Paths.get("./data")).use { stream ->
            stream.filter(Files::isRegularFile).toList()
        }

        // Limit parallelism for the I/O-bound work
        val ioDispatcher = Dispatchers.IO.limitedParallelism(8)
        val documentParser = TextDocumentParser()

        // Process files in parallel inside a structured scope
        val documents = coroutineScope {
            paths
                .map { path ->
                    async {
                        try {
                            loadAsync(
                                source = FileSystemSource(path),
                                parser = documentParser,
                                dispatcher = ioDispatcher,
                            )
                        } catch (e: Exception) {
                            logger.error("Failed to load document: $path", e)
                            null
                        }
                    }
                }
                .awaitAll()
                .filterNotNull()
        }

        // Process loaded documents
        documents.forEach { doc -> println(doc.text()) }
    } catch (e: Exception) {
        logger.error("Failed to process documents", e)
        throw e
    }
}
DocumentParser async API
suspend fun parseInputStream(input: InputStream) {
    input.use { stream -> // automatically closes the stream
        val document = TextDocumentParser().parseAsync(stream) // suspending function
        println(document.text())
    }
}
Prompt Templates
Customize AI interactions with flexible prompt templates.
Basic Usage
- Define templates in the classpath:

prompts/system.mustache:

You are a helpful assistant using chatMemoryID={{chatMemoryID}}

prompts/user.mustache:

Hello, {{userName}}! {{message}}
- Use templates with LangChain4j:
interface Assistant {
    @UserMessage("prompts/user.mustache")
    fun askQuestion(
        @UserName userName: String,
        @V("message") question: String,
    ): String
}

val assistant = AiServices
    .builder(Assistant::class.java)
    .systemMessageProvider(
        TemplateSystemMessageProvider("prompts/system.mustache"),
    )
    .chatLanguageModel(model)
    .build()

val response = assistant.askQuestion(
    userName = "Friend",
    question = "How are you?",
)
Customization
Configure via langchain4j-kotlin.properties:
# Custom template source
prompt.template.source=com.example.CustomTemplateSource
# Custom template renderer
prompt.template.renderer=com.example.CustomRenderer
You may provide your own PromptTemplateSource and TemplateRenderer implementations.
Links
For more details, see the documentation.
Try it out: GitHub