Vertex AI in Firebase was recently announced as generally available, and in this article we’ll show how to use the associated Android and iOS SDKs in a Compose/Kotlin Multiplatform project. The code shown is included in the VertexAI-KMP-Sample repository.
Vertex AI in Firebase is now generally available.
— Firebase (@Firebase) October 21, 2024
You can confidently use our Kotlin, Swift, Web and Flutter SDKs to release your AI features into production, knowing they're backed by Google Cloud and Vertex AI quality standards.
Discover more ↓ https://t.co/SoOfAT9SOz
Setup
We initially performed the following steps:
- Created a new Compose/Kotlin Multiplatform project using the Kotlin Multiplatform Wizard (with the “Share UI” option enabled).
- Created a new Firebase project and enabled use of Vertex AI for that project.
- Added Android and iOS apps in the Firebase console. The associated google-services.json and GoogleService-Info.plist files were then downloaded and added to the Android and iOS projects.
Shared KMP code setup
We’re making use of the following libraries in shared code and have made the related changes shown below to the build config for the shared KMP module.
- Firebase (for the Vertex AI APIs)
- Kotlinx Serialization (for parsing the Vertex AI json response)
- Markdown (for displaying the Vertex AI markdown response)
- Koin (dependency injection, along with support for the KMP ViewModel)
libs.versions.toml
[versions]
...
firebaseBom = "33.5.1"
koin = "4.0.0"
kotlinx-serialization = "1.7.3"
markdownRenderer = "0.26.0"
[libraries]
...
firebase-bom = { module = "com.google.firebase:firebase-bom", version.ref = "firebaseBom" }
firebase-vertexai = { module = "com.google.firebase:firebase-vertexai" }
koin-core = { module = "io.insert-koin:koin-core", version.ref = "koin" }
koin-compose-viewmodel = { module = "io.insert-koin:koin-compose-viewmodel", version.ref = "koin" }
kotlinx-serialization = { group = "org.jetbrains.kotlinx", name = "kotlinx-serialization-core", version.ref = "kotlinx-serialization" }
kotlinx-serialization-json = { group = "org.jetbrains.kotlinx", name = "kotlinx-serialization-json", version.ref = "kotlinx-serialization" }
markdown-renderer = { module = "com.mikepenz:multiplatform-markdown-renderer-m3", version.ref = "markdownRenderer" }
[plugins]
...
googleServices = { id = "com.google.gms.google-services", version.ref = "googleServices" }
kotlinxSerialization = { id = "org.jetbrains.kotlin.plugin.serialization", version.ref = "kotlin" }
build.gradle.kts
plugins {
    ...
    alias(libs.plugins.kotlinxSerialization)
    alias(libs.plugins.googleServices)
}

// the dependency blocks below are declared inside kotlin { sourceSets { ... } }
androidMain.dependencies {
    ...
    implementation(project.dependencies.platform(libs.firebase.bom))
    implementation(libs.firebase.vertexai)
}

commonMain.dependencies {
    ...
    implementation(libs.kotlinx.serialization)
    implementation(libs.kotlinx.serialization.json)
    implementation(libs.markdown.renderer)
    implementation(libs.koin.core)
    implementation(libs.koin.compose.viewmodel)
}
Android client setup
We initialise Firebase in our main Android application class.
VertexAIKMPApp.kt
class VertexAIKMPApp : Application() {
    override fun onCreate() {
        super.onCreate()
        FirebaseApp.initializeApp(this)
    }
}
iOS client setup
The Firebase Swift package was added to the iOS project and the related initialisation is done in the App init, as shown below.
struct iOSApp: App {
    init() {
        FirebaseApp.configure()
    }

    var body: some Scene {
        WindowGroup {
            ContentView()
        }
    }
}
Using Vertex AI
We’re making use of two Vertex AI features in this sample (illustrated in the screenshots below):
- text generation with a markdown response (rendered using the Markdown CMP library)
- structured json generation (with custom rendering of the result)
To support Android- and iOS-specific implementations of this functionality, we created the following interface in shared code (commonMain):
GenerativeModel.kt
interface GenerativeModel {
    suspend fun generateTextContent(prompt: String): String?
    suspend fun generateJsonContent(prompt: String): String?
}
This is implemented on Android as follows (in the androidMain source set of the shared KMP module), where we make use of the Vertex AI Android SDK. We’re also defining here the schema that Vertex AI will use when generating the json response.
For this sample we’re using a specific schema that supports a range of prompts that return a list of people (for example, as shown in the screenshots above). The response is parsed using the Kotlinx Serialization library and rendered as a simple list in our shared Compose Multiplatform UI code.
GenerativeModel.android.kt
class GenerativeModelAndroid : GenerativeModel {
    private val jsonSchema = Schema.array(
        Schema.obj(
            mapOf(
                "name" to Schema.string(),
                "country" to Schema.string()
            )
        )
    )

    override suspend fun generateTextContent(prompt: String): String? {
        val generativeModel = Firebase.vertexAI.generativeModel(
            modelName = "gemini-1.5-flash"
        )
        return generativeModel.generateContent(prompt).text
    }

    override suspend fun generateJsonContent(prompt: String): String? {
        val generativeModel = Firebase.vertexAI.generativeModel(
            modelName = "gemini-1.5-flash",
            generationConfig = generationConfig {
                responseMimeType = "application/json"
                responseSchema = jsonSchema
            }
        )
        return generativeModel.generateContent(prompt).text
    }
}
On iOS this is implemented in the following Swift code (using the Vertex AI iOS SDK in this case). We’re also defining the above-mentioned schema here. A future enhancement would be to define this schema in some generic format in shared code and then translate that in the Android and iOS implementations.
GenerativeModelIOS.swift
class GenerativeModelIOS: ComposeApp.GenerativeModel {
    static let shared = GenerativeModelIOS()
    let vertex = VertexAI.vertexAI()

    let jsonSchema = Schema.array(
        items: .object(
            properties: [
                "name": .string(),
                "country": .string()
            ]
        )
    )

    func generateTextContent(prompt: String) async throws -> String? {
        let model = vertex.generativeModel(
            modelName: "gemini-1.5-flash"
        )
        return try await model.generateContent(prompt).text
    }

    func generateJsonContent(prompt: String) async throws -> String? {
        let model = vertex.generativeModel(
            modelName: "gemini-1.5-flash",
            generationConfig: GenerationConfig(
                responseMIMEType: "application/json",
                responseSchema: jsonSchema
            )
        )
        return try await model.generateContent(prompt).text
    }
}
That iOS implementation of GenerativeModel is passed down to shared code when invoking MainViewController (which wraps our shared Compose Multiplatform UI code).
struct ComposeView: UIViewControllerRepresentable {
    func makeUIViewController(context: Context) -> UIViewController {
        MainViewControllerKt.MainViewController(generativeModel: GenerativeModelIOS.shared)
    }

    func updateUIViewController(_ uiViewController: UIViewController, context: Context) {}
}

struct ContentView: View {
    var body: some View {
        ComposeView().ignoresSafeArea()
    }
}
In MainViewController we store the instance of GenerativeModelIOS that the iOS client passed down…
MainViewController.kt
fun MainViewController(generativeModel: GenerativeModel) = ComposeUIViewController {
    generativeModelIOS = generativeModel
    App()
}
…and then use it in a Kotlin wrapper that’s defined in iosMain in the shared KMP code.
GenerativeModel.ios.kt
class GenerativeModelIOS : GenerativeModel {
    override suspend fun generateTextContent(prompt: String): String? {
        return generativeModelIOS?.generateTextContent(prompt)
    }

    override suspend fun generateJsonContent(prompt: String): String? {
        return generativeModelIOS?.generateJsonContent(prompt)
    }
}
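The generativeModelIOS property used in the two snippets above is a top-level variable in the iosMain source set that holds the Swift implementation passed in from the iOS app. As a minimal sketch (the exact declaration in the repository may differ), it could look like this:

// iosMain: top-level holder for the Swift GenerativeModel implementation
// that MainViewController receives from the iOS client.
var generativeModelIOS: GenerativeModel? = null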
The specific instances of GenerativeModel are created in platform-specific Koin modules. Koin will also inject the appropriate implementation into our shared view model.
Koin.android.kt
actual fun platformModule(): Module = module {
    single<GenerativeModel> { GenerativeModelAndroid() }
}
Koin.ios.kt
actual fun platformModule(): Module = module {
    single<GenerativeModel> { GenerativeModelIOS() }
}
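For completeness, here’s a rough sketch of how these platform modules and the shared view model (shown in the next section) could be tied together when starting Koin. The initKoin and commonModule names are illustrative rather than taken from the sample, which may wire this up differently (for example using Koin’s viewModel DSL):

import org.koin.core.context.startKoin
import org.koin.core.module.Module
import org.koin.dsl.module

// commonMain: provides the shared view model, injecting whichever
// GenerativeModel implementation the platform module has bound.
fun commonModule(): Module = module {
    factory { GenerativeModelViewModel(get()) }
}

// Called from each platform's entry point to start Koin with both modules.
fun initKoin() {
    startKoin {
        modules(platformModule(), commonModule())
    }
}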
Shared ViewModel
We invoke the above APIs in the following shared view model (we’re using the KMP Jetpack ViewModel library here). It includes the generateContent function, which the Compose UI code calls with the text prompt and a flag indicating whether or not to generate a json response. If generateJson is set, we also parse the response and return the structured data to the UI code, which renders it as a basic list (as shown in the screenshots above).
GenerativeModelViewModel.kt
class GenerativeModelViewModel(private val generativeModel: GenerativeModel) : ViewModel() {
    val uiState = MutableStateFlow<GenerativeModelUIState>(GenerativeModelUIState.Initial)

    fun generateContent(prompt: String, generateJson: Boolean) {
        uiState.value = GenerativeModelUIState.Loading
        viewModelScope.launch {
            uiState.value = try {
                if (generateJson) {
                    val response = generativeModel.generateJsonContent(prompt)
                    if (response != null) {
                        val entities = Json.decodeFromString<List<Entity>>(response)
                        GenerativeModelUIState.Success(entityContent = entities)
                    } else {
                        GenerativeModelUIState.Error("Error generating content")
                    }
                } else {
                    val response = generativeModel.generateTextContent(prompt)
                    GenerativeModelUIState.Success(textContent = response)
                }
            } catch (e: Exception) {
                GenerativeModelUIState.Error(e.message ?: "Error generating content")
            }
        }
    }
}
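The Entity model and GenerativeModelUIState referenced above aren’t shown in the article snippets; the following are minimal sketches consistent with how they’re used (the definitions in the repository may differ):

import kotlinx.serialization.Serializable

// A serializable model matching the "name"/"country" fields in the json schema.
@Serializable
data class Entity(val name: String, val country: String)

// UI state emitted by the view model and consumed by the Compose UI.
sealed class GenerativeModelUIState {
    data object Initial : GenerativeModelUIState()
    data object Loading : GenerativeModelUIState()
    data class Success(
        val textContent: String? = null,
        val entityContent: List<Entity>? = null
    ) : GenerativeModelUIState()
    data class Error(val message: String) : GenerativeModelUIState()
}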
Shared Compose Multiplatform UI code
Finally, the following is taken from the shared Compose UI code that’s used to render the response.
is GenerativeModelUIState.Success -> {
    if (uiState.entityContent != null) {
        LazyColumn {
            items(uiState.entityContent) { item ->
                ListItem(
                    headlineContent = { Text(item.name) },
                    supportingContent = { Text(item.country) }
                )
            }
        }
    } else if (uiState.textContent != null) {
        Markdown(uiState.textContent)
    }
}
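To give a fuller picture of how this is driven, here’s a hedged sketch of a composable that obtains the shared view model via Koin and triggers generation for an entered prompt. The PromptScreen name is illustrative, and koinViewModel() is assumed to come from the koin-compose-viewmodel artifact the sample depends on; the actual screen in the repository is more elaborate.

import androidx.compose.foundation.layout.Column
import androidx.compose.material3.Button
import androidx.compose.material3.Text
import androidx.compose.material3.TextField
import androidx.compose.runtime.*
import com.mikepenz.markdown.m3.Markdown
import org.koin.compose.viewmodel.koinViewModel

// Illustrative screen: obtains the shared view model via Koin and
// renders the markdown (or error) result for the entered prompt.
@Composable
fun PromptScreen(viewModel: GenerativeModelViewModel = koinViewModel()) {
    var prompt by remember { mutableStateOf("") }
    val uiState by viewModel.uiState.collectAsState()

    Column {
        TextField(value = prompt, onValueChange = { prompt = it })
        Button(onClick = { viewModel.generateContent(prompt, generateJson = false) }) {
            Text("Generate")
        }
        when (val state = uiState) {
            is GenerativeModelUIState.Success -> state.textContent?.let { Markdown(it) }
            is GenerativeModelUIState.Error -> Text(state.message)
            else -> {}
        }
    }
}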
Featured in Android Weekly #647 and Kotlin Weekly Issue #432
Related tweet
Using Vertex AI in a Compose/Kotlin Multiplatform project https://t.co/k6P1cO5zly
— John O'Reilly (@joreilly) October 27, 2024