developer documentation v0.0.27
mimik Developer Documentation

Using mimik ai for all iOS AI applications

Objective

The objective of this article is to demonstrate how to integrate AI components, including language models, into the iOS application environment using the mimik Client Library API.

Intended Readers

The intended readers of this document are iOS software developers who want to familiarize themselves with how the mimik Client Library interfaces with AI.

What You'll Be Doing

Learning about topics relevant to working with the mimik Client Library AI interfaces, such as:

  • Integrating AI components
  • Configuring AI language model source
  • Downloading AI language model
  • Referencing downloaded AI language model
  • Chatting with downloaded AI language model
  • Processing AI language model chat stream responses
  • Integrating and Downloading AI in one go

Prerequisites

  • Connecting a real iOS device to the development computer and selecting it as the target in Xcode. This tutorial will not work with an iOS Simulator.
  • A familiarity with the mimik Client Library components as described in this article.
  • An understanding of the mimik Client Library integration and initialization process as laid out here.
  • An understanding of how to start mim OE Runtime.
  • An understanding of how to generate an Access Token.
NOTE:

Working with the iOS Simulator and the mimik Client Library entails some special consideration. For more information about iOS Simulator support see this tutorial.

Overview

Our example AI use case consists of several cooperating components. Deployed as a package via the mimik Client Library to your iOS application environment, the mimik AI components add a simple yet powerful interface to the world of AI language models.

So, to get your application ready to start communicating with AI language models, we need to deploy the mimik ai use case into its environment.

To simplify the work, we have prepared a configuration json file that describes how the mimik ai use case package needs to be deployed, by way of the deploy use case method on the mimik Client Library. This adds everything your application needs to start communicating with AI language models.

In this tutorial, we won't cover how to integrate and initialize the mimik Client Library into an iOS project. This is covered in the following tutorial.

Similarly, starting a mim OE Runtime instance and getting an Access Token for it is already covered in its own tutorial.

So, let's begin!
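Before diving into each step, here is a high-level sketch of how the pieces fit together, chaining the helper methods defined in the sections below. Note that how to obtain an EdgeClient.UseCase from the returned Deployment (here a hypothetical `useCase` property) and the optionality of the model `id` are assumptions about the SDK types, not confirmed API; treat this as an outline, not a definitive implementation.

```swift
// High-level flow sketch, using the helper methods defined later in this
// tutorial. The `deployment.useCase` property and model `id` optionality
// are assumptions about the SDK types.
func runAIFlow(accessToken: String, apiKey: String) async {
    // 1. Deploy the mimik ai use case into the application environment.
    guard case let .success(deployment) = await integrateAI(accessToken: accessToken),
          let useCase = deployment.useCase else { return }

    // 2. Decode the pre-configured AI language model definition.
    guard let modelRequest = languageModel(), let modelId = modelRequest.id else { return }

    // 3. Download the model locally, then 4. confirm its reference is available.
    guard case .success = await downloadAIModel(accessToken: accessToken, apiKey: apiKey, model: modelRequest, useCase: useCase),
          case .success = await findAIModel(id: modelId, accessToken: accessToken, apiKey: apiKey, useCase: useCase) else { return }

    // 5. Chat with the downloaded model; responses arrive via processAIChat.
    _ = await askAIModel(id: modelId, question: "Hello!", useCase: useCase, accessToken: accessToken, apiKey: apiKey)
}
```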

Integrating AI components

We start by gathering a few configuration values.

The API key is like a password that you choose yourself. It will be used to secure the API communication channels between the mimik ai use case components and your application environment.

Next, we gather the url to the use case configuration json, which was pre-configured to make the deployment easier for developers. It instructs the mimik Client Library to download the mILM edge microservice and configure it in a specific way. It also informs the mimik Client Library about the available RESTful endpoints and their configurations on the edge microservice.

With that, we have all the information needed to call the deploy use case method on the mimik Client Library, passing the accessToken (passed-through to our method), apiKey and configUrl values as parameters.

Notice that we intentionally omitted the model parameter. This is because we want to demonstrate downloading the AI language model in a separate step. Although this could technically be done in one step, as will be demonstrated later in this tutorial, we wanted to separate the areas of concern for easier readability. This is also the reason why there is no code in the downloadHandler.

Next, we need to provide code in two more handlers.

First is the requestHandler, which provides a reference to your AI integration call request. We save the reference so that we can call on it later, if needed, for example to cancel it or to inquire about its state.

Furthermore, we need to provide code in the final completionHandler. This is where we'll get the final result of the integration call once it fully concludes.

So we validate the call result on its conclusion. If it is successful, we return a success with the reference to the deployed use case. If there is an issue, we return a failure.

Keeping the reference to the deployed mimik ai use case is important, since we'll need it when making chat requests to the AI language model. We can either keep the reference in memory, as in our simple example below, or persist it somewhere permanent, such as an encoded Codable object.
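As a sketch of the "persist it somewhere permanent" option, a Codable value such as the deployment reference can be stored with JSONEncoder and UserDefaults (the tutorial's final example does this for EdgeClient.UseCase.Deployment). The generic helpers below are our own illustration, assuming the value conforms to Codable:

```swift
import Foundation

// Hypothetical helpers for persisting any Codable value, such as the
// deployed use case reference, across application launches.
func persist<T: Codable>(_ value: T, forKey key: String) {
    if let data = try? JSONEncoder().encode(value) {
        UserDefaults.standard.set(data, forKey: key)
    }
}

func restore<T: Codable>(_ type: T.Type, forKey key: String) -> T? {
    guard let data = UserDefaults.standard.data(forKey: key) else {
        return nil
    }
    return try? JSONDecoder().decode(type, from: data)
}
```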

We also want to clear the request reference that we saved in the requestHandler once the call concludes, whether successfully or not.

As with all example code in this tutorial, heavily commented code follows the detailed description of each topic.


func integrateAI(accessToken: String) async -> Result<EdgeClient.UseCase.Deployment, NSError> {

    // Your API key, used to secure the API communication channels between the mimik ai use case and the application environment.
    let apiKey = "1234-5678-910A"

    // A url to the mimik ai use case configuration json. It is pre-configured with values for this example.
    let configUrl = "https://github.com/mimikgit/cocoapod-mimOE-SE-iOS-developer/releases/download/5.6.2/mimik-ai-use-case-config.json"

    // Calling mimik Client Library to integrate the mimik ai use case from a configuration url. We intentionally leave the AI language model download to a separate call.
    switch await self.edgeClient.integrateAI(accessToken: accessToken, apiKey: apiKey, configUrl: configUrl, model: nil, downloadHandler: { _ in
        // Not downloading the AI language model in this example method, so there is no need for any download handler code
    }, requestHandler: { request in
        // Keeping a reference to the AI integration request, in case we want to examine its state or cancel it before it ends.
        activeStream = request
    }) {

    // Validating the result of the AI integration request
    case .success(let result):
        // Clearing out the AI integration request reference
        activeStream = nil
        print("AI integration call successful")
        // AI integration request successful, returning a success with the deployed mimik ai use case reference
        return .success(result)

    case .failure(let error):
        print("AI integration call unsuccessful", error.localizedDescription)
        // Clearing out the AI integration request reference
        activeStream = nil
        // AI integration request unsuccessful, returning a failure
        return .failure(error)
    }
}

With that, we have the mimik ai use case deployed. Next, we'll look at downloading AI language models.

Configuring AI language model source

Before we can start downloading AI language models locally, to the user's device, we need to do a bit of prep work.

First, we need to decide which AI language model to download. To simplify, we have prepared an example definition of a third-party model. Essentially, you can work with any AI language model that fits within the hardware and software capabilities of your iOS device.

In the example code below, we have placed a pre-configured definition of an AI language model into a json string.

We convert the json string to a Data object, then decode it into a full configuration object.

A couple of points of interest in the configuration json:

  • expectedDownloadSize defines the expected download size of an AI language model. This is important, since model downloads may require a significant amount of storage space on user devices. This value will be used by the mimik Client Library to confirm that the required amount of storage space is available on the user's device before starting the download.

  • url specifies the location where the AI language model will be downloaded from. It can be anywhere you have access to.
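Since expectedDownloadSize exists to protect the user's storage, it may help to see what such a free-space check could look like on the application side as well. The sketch below uses Apple's documented volumeAvailableCapacityForImportantUsage resource value; the function name and usage are our own illustration, not part of the mimik Client Library:

```swift
import Foundation

// Hypothetical helper: returns true when the device reports at least
// `requiredBytes` of storage available for important (user-initiated)
// usage, false when it does not, or nil when the value cannot be read.
func hasAvailableStorage(requiredBytes: Int64) -> Bool? {
    let home = URL(fileURLWithPath: NSHomeDirectory())
    guard let values = try? home.resourceValues(forKeys: [.volumeAvailableCapacityForImportantUsageKey]),
          let capacity = values.volumeAvailableCapacityForImportantUsage else {
        return nil
    }
    return capacity >= requiredBytes
}
```

For the example model, you would pass the 1800000000 value from the configuration json as `requiredBytes`.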

With the AI language model configuration in hand, we can return it as an object to the next step of this tutorial.

func languageModel() -> EdgeClient.AI.Model.CreateModelRequest? {

    let model = """
    {
        "expectedDownloadSize": 1800000000,
        "object": "model",
        "owned_by": "lmstudio-community",
        "id": "lmstudio-community/gemma-1.1-2b-it-GGUF",
        "url": "https://huggingface.co/lmstudio-community/gemma-1.1-2b-it-GGUF/resolve/main/gemma-1.1-2b-it-Q4_K_M.gguf?download=true"
    }
    """

    do {
        let data = Data(model.utf8)
        let decoder = JSONDecoder()
        let decodedData = try decoder.decode(EdgeClient.AI.Model.CreateModelRequest.self, from: data)
        return decodedData
    } catch {
        print("AI model error", error.localizedDescription)
        return nil
    }
}

Downloading AI language model

To start the AI language model download, we need to understand a few values passed to our method to kick things off.

  • accessToken is the same Access Token value that was generated previously in this tutorial.
  • apiKey is our own API key, which we established earlier.
  • model is a value from the languageModel method, as explained in the previous topic.

With those values in hand, we can call the download method on the mimik Client Library, passing the accessToken, apiKey and model values to it as parameters.

In the downloadHandler, we pay attention to the download value. This value tells us the progress the AI language model download is making. In our example, the model has an expected download size of about 1.8GB, and depending on the device's internet and hardware speed, it might take a fair amount of time to complete. Hence, in a production application, we'd want to make sure the user is properly informed about the process. In our example, we just print the download progress information to the console log.

Additionally, in the requestHandler, we are provided with a reference to the model download call request. We save the reference so that we can call on it later, if needed, for example to cancel it or to inquire about its state.

Furthermore, we need to provide code in the final completionHandler. This is where we'll get the final result of the download request once it fully concludes. We validate the call result and, if successful, return a success. If there is an issue, we return a failure.

We also want to clear out the request reference we saved in the requestHandler once the call concludes, whether successfully or not.

Having gone through the checks successfully, we should now have the AI language model downloaded locally on the user's device.

In the next topic, we'll learn how to reference the downloaded model.

func downloadAIModel(accessToken: String, apiKey: String, model: EdgeClient.AI.Model.CreateModelRequest, useCase: EdgeClient.UseCase) async -> Result<Bool, NSError> {

    // Calling mimik Client Library to download the AI language model using the mILM edge microservice that was deployed as part of the mimik ai use case
    switch await self.edgeClient.downloadAIModel(accessToken: accessToken, apiKey: apiKey, model: model, useCase: useCase, downloadHandler: { download in

        // Capturing the download progress information, exiting if there is an issue
        guard case let .success(downloadProgress) = download else {
            print("Model download error")
            activeStream = nil
            return
        }

        // Printing out a formatted download progress value to the console log for the developer and user's benefit
        let percent = String(format: "%.2f", ceil((downloadProgress.size / downloadProgress.totalSize) * 10_000) / 100)
        print("Model download progress: \(percent)%")

    }, requestHandler: { request in
        // Keeping the reference to the AI language model download request, in case we want to examine its state or cancel it before it ends.
        activeStream = request
    }) {

    case .success(let downloadResult):
        // Clearing out the AI language model download request reference
        activeStream = nil
        print("Model download success", downloadResult)
        // AI language model download request successful, returning a success
        return .success(true)

    case .failure(let error):
        print("Model download error", error.localizedDescription)
        // Clearing out the AI language model download request reference
        activeStream = nil
        // AI language model download request unsuccessful, returning a failure
        return .failure(error)
    }
}

Referencing downloaded AI language model

To find a reference to a downloaded AI language model, we need to understand a few values passed to our method to get started.

  • id is the AI language model value from the languageModel method.
  • accessToken is the same Access Token value used for the other methods in this tutorial.
  • apiKey is our own API key, which we established earlier.
  • useCase is a value from the integrateAI method we encountered earlier in the tutorial.

With those values ready, we can call the available method on the mimik Client Library, passing the id, accessToken, apiKey and useCase values to it as parameters.

We evaluate the call, then filter through the returned list of available AI language models, looking for a model with a matching model id. If found, we return a success with the matched model reference, or a failure if there is an issue or no match.

With that, we have an object reference to the downloaded AI language model.

Let's get chatting with it!

func findAIModel(id: String, accessToken: String, apiKey: String, useCase: EdgeClient.UseCase) async -> Result<EdgeClient.AI.Model, NSError> {

    // Calling mimik Client Library to list the available AI language models
    guard case let .success(models) = await edgeClient.availableAIModels(accessToken: accessToken, apiKey: apiKey, useCase: useCase) else {
        // There was an issue with the call, returning a failure
        return .failure(NSError(domain: "Error", code: 500))
    }

    // Filtering the returned AI language models to find the one with the specified model id
    guard let model = models.first(where: { model in
        model.id == id
    }) else {
        // There is no match for the specified model id, returning a failure.
        return .failure(NSError(domain: "No Matches Found", code: 500))
    }

    print("Found matching model:", model)
    // Matching model id was found, returning success with the matched AI language model reference
    return .success(model)
}

Chatting with downloaded AI language model

To start chatting with a downloaded AI language model, we need to understand a few values passed to our method:

  • id is the AI language model value from the languageModel method.
  • question is the chat question we want to ask the AI language model.
  • useCase is a value from the integrateAI method we encountered earlier in the tutorial.
  • accessToken is the same Access Token value used for the other methods in this tutorial.
  • apiKey is our own API key, which we established earlier.

With the passed-through values ready, we can call the ask method on the mimik Client Library, passing the id, accessToken, apiKey, question and useCase values to it as parameters.

In the requestHandler, we are provided with a reference to the AI language model chat request. We save the reference so that we can call on it later, if needed, for example to cancel it or to inquire about its state.

The most active part of the code is the streamHandler. This is where the responses to our chat question will be streamed from the AI language model. In the streamHandler, we validate each incoming stream response individually. If successful, we pass it on to a specialized processAIChat method for further processing. If there is an issue, we move on and wait for more stream entries, because the chat stream is considered active until it finally concludes in the completionHandler.

We have separated the areas of concern for stream handling and stream response processing, for better code example clarity.

Furthermore, we need to provide code in the final completionHandler. This is where we'll get the final result of the chat request once it fully concludes. We validate the call result. If successful, we return a success. If there is an issue, we return a failure.

func askAIModel(id: String, question: String, useCase: EdgeClient.UseCase, accessToken: String, apiKey: String) async -> Result<Void, NSError> {

    // Calling mimik Client Library to start a chat stream with the downloaded AI language model
    switch await edgeClient.askAI(modelId: id, accessToken: accessToken, apiKey: apiKey, question: question, useCase: useCase, streamHandler: { stream in

        // Validating incoming chat stream responses
        switch stream {
        case .success(let chatStream):
            // Incoming AI chat stream was successful, sending data for further processing
            processAIChat(stream: chatStream)
        case .failure(let error):
            // Incoming AI chat stream was unsuccessful, waiting for more incoming stream data
            print("AI stream error", error.localizedDescription)
        }

    }, requestHandler: { request in
        // Keeping the reference to the AI language model chat request, in case we want to examine its state or cancel it before it ends.
        activeStream = request
    }) {
    case .success:
        // Clearing out the AI chat request reference
        activeStream = nil
        // AI chat request concluded successfully, returning a void success
        return .success(())
    case .failure(let error):
        // Clearing out the AI chat request reference
        activeStream = nil
        // AI chat request concluded unsuccessfully, returning a failure
        return .failure(error)
    }
}

Processing AI language model chat stream responses

Processing a raw incoming stream of responses from AI language models can be tricky. This is where the mimik Client Library helps, with chat stream categorization. The processing accuracy depends on the AI language model used, but in our example it should be pretty straightforward.

We use a switch statement to categorize each incoming chat response by its type.

For example, the content type is something that you'd normally want to show to the user in the UI, or combine together to create full sentences. In our example, we just print the content to the console output as it comes in.
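To illustrate the "combine together to create full sentences" option, here is a minimal, SDK-independent sketch of accumulating streamed content chunks into one display string (the type and method names are our own):

```swift
// Minimal accumulator for streamed chat content chunks, suitable for
// binding to a UI label or text view.
struct ChatTranscript {
    private(set) var text = ""

    // Append one incoming `content` chunk from the chat stream.
    mutating func append(_ chunk: String) {
        text += chunk
    }
}

// Example: three chunks arrive over time and form one sentence.
var transcript = ChatTranscript()
for chunk in ["Hello", ", ", "world!"] {
    transcript.append(chunk)
}
// transcript.text is now "Hello, world!"
```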

When we see chat responses indicating that the AI language model is being loaded, or is ready, we log them separately in the console output with specialized messages.

Once the sorting algorithm reaches the streamDone category, the AI language model chat stream has concluded.

Please note that the stream of responses might sometimes take several minutes to conclude, depending on the AI language model and the chat question.

Additionally, if the user decides they do not want to wait for the chat response stream to conclude, the developer has the option to cancel it using the request reference saved in the requestHandler.

func processAIChat(stream: EdgeClient.AI.Model.CompletionType) {

    // Sorting the incoming AI chat stream data by its content type
    switch stream {

    // Main AI language model chat stream content type
    case .content(let content):

        // `response` is assumed to be a UI-bound String property declared elsewhere,
        // holding the accumulated chat output. If it still shows a model state
        // message, replace it with the incoming content.
        if response.contains("Model Ready") || response.contains("Model Loading") {
            response = content
            return
        }

        // Printing the AI chat stream content to the console. Normally, you'd want to show this to the user in the UI, or process it further elsewhere.
        print(content)

    // AI language model is being loaded
    case .modelLoading:
        print("Model Loading, please wait")

    // AI language model is ready
    case .modelReady:
        print("Model Ready, please wait")

    // Stream concluded
    case .streamDone:
        print("Stream Done")

    // Stream contains additional stream types
    case .comment, .event, .id, .retry:
        print("other stream types")

    // Stream contains an unknown, unsupported content type
    @unknown default:
        print("unknown stream type")
    }
}

Integrating and Downloading AI in one go

Similarly to the integrateAI method that we encountered earlier in this tutorial, here we will also integrate the mimik ai use case. This time, however, we'll also initiate a download of an AI language model in the same method. This means that we won't have to handle the model download separately, making it available to the application environment at the conclusion of this one step.

As before, we first gather a few configuration values.

The API key is like a password that you choose yourself. It will be used to secure the API communication channels between the mimik ai use case components and your application environment.

Next, we gather the url to the use case configuration json, which was pre-configured to make the deployment easier for developers. It instructs the mimik Client Library to download the mILM edge microservice and configure it a specific way. It also informs the mimik Client Library about the available RESTful endpoints and their configurations on the edge microservice.

Then, we need to decide which AI language model to download. To simplify, we have prepared an example definition of a third-party model. Essentially, you can work with any AI language model that fits within the hardware and software capabilities of your iOS device.

With that, we have all the information needed to call the deploy use case method on the mimik Client Library, passing the accessToken (passed through to our method), apiKey, configUrl and languageModel values as parameters.

Unlike in the earlier integrateAI method, this time we pay attention to the download value in the downloadHandler. This value tells us the progress the AI language model download is making. In our example, the model has an expected download size of about 1.8GB, and depending on the device's internet and hardware speed, it might take a fair amount of time to complete. Hence, in a production application, we'd want to make sure the user is properly informed about the process. In our example, we just print the download progress information to the console log.

Additionally, in the requestHandler, we are provided with a reference to the integration and download call request. We save the reference so that we can call on it later, if needed, for example to cancel it or to inquire about its state.

Furthermore, we need to provide code in the final completionHandler. This is where we'll get the final result of the integration and download request once it fully concludes. We validate the call result. If successful, we return a success. If there is an issue, we return a failure.

We also want to clear out the request reference we saved in the requestHandler once the call concludes, whether successfully or not.

Having gone through the checks successfully, we should now have the mimik ai use case integrated and the AI language model of our choice downloaded locally on the user's device.

func integrateAndDownloadAI(accessToken: String) async -> Result<EdgeClient.UseCase.Deployment, NSError> {

    // Your API key, used to secure the API communication channels between the mimik ai use case and the application environment.
    let apiKey = "1234-5678-910A"

    // A url to the mimik ai use case configuration json. It is pre-configured with values for this example.
    let aiPackageConfigUrl = "https://github.com/mimikgit/cocoapod-mimOE-SE-iOS-developer/releases/download/5.6.2/mimik-ai-use-case-config.json"

    guard let languageModel = LoadConfig.aiLanguageModel() else {
        return .failure(NSError(domain: "Error", code: 500))
    }

    // Calling mimik Client Library to integrate the mimik ai use case from a configuration url, this time also downloading the AI language model in the same call
    switch await self.edgeClient.integrateAI(accessToken: accessToken, apiKey: apiKey, configUrl: aiPackageConfigUrl, model: languageModel, downloadHandler: { download in

        // Capturing the download progress information, exiting if there is an issue
        guard case let .success(downloadProgress) = download else {
            print("Model download error")
            activeStream = nil
            return
        }

        // Printing out a formatted download progress value to the console log for the developer and user's benefit
        let percent = String(format: "%.2f", ceil((downloadProgress.size / downloadProgress.totalSize) * 10_000) / 100)
        print("Model download progress: \(percent)%")

    }, requestHandler: { request in
        // Keeping a reference to the AI integration request, in case we want to examine its state or cancel it before it ends.
        activeStream = request
    }) {

    // Validating the result of the AI integration request
    case .success(let result):

        // Storing use case deployment information in UserDefaults
        if let encoded = try? JSONEncoder().encode(result) {
            UserDefaults.standard.set(encoded, forKey: kAIUseCaseDeployment)
            UserDefaults.standard.synchronize()
        }

        // Clearing out the AI integration request reference
        activeStream = nil
        print("AI integration call successful")
        // AI integration request successful, returning a success with the deployed mimik ai use case reference
        return .success(result)

    case .failure(let error):
        print("AI integration call unsuccessful", error.localizedDescription)
        // Clearing out the AI integration request reference
        activeStream = nil
        // AI integration request unsuccessful, returning a failure
        return .failure(error)
    }
}

Example Xcode project also works Offline

Since the AI language model gets fully downloaded onto your device, the example application can chat with the model even when the device's internet connection is disabled, for example in airplane mode. Of course, you'd have to have the AI language model downloaded first, before going offline.
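If you want the UI to reflect connectivity state (for example, to reassure the user that chatting with a downloaded model still works offline), Apple's Network framework provides NWPathMonitor. A minimal sketch, independent of the mimik Client Library; the class name and callback shape are our own:

```swift
import Foundation
import Network

// Observes network reachability. Chatting with a locally downloaded
// model works either way; this only lets the UI surface the state.
final class ConnectivityObserver {
    private let monitor = NWPathMonitor()
    private let queue = DispatchQueue(label: "connectivity")

    // Starts monitoring; `onChange` receives true when the device is online.
    func start(onChange: @escaping (Bool) -> Void) {
        monitor.pathUpdateHandler = { path in
            onChange(path.status == .satisfied)
        }
        monitor.start(queue: queue)
    }

    func stop() {
        monitor.cancel()
    }
}
```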

iOS application project example on GitHub.

Test Flight

This example application is also available as a pre-configured download on Test Flight.

  • Open and accept this Test Flight link on the iOS device you want to install the application on.
  • Open the application once done installing through Test Flight.

Additional reading

To get more out of this article, the reader can further familiarize themselves with the following concepts and techniques:


© mimik technology, Inc. all rights reserved