Using mimik ai for all iOS AI applications
Objective
The objective of this article is to demonstrate how to integrate AI components, including language models, into an iOS application environment using the mimik Client Library.
Intended Readers
The intended readers of this document are iOS software developers who want to familiarize themselves with how the mimik Client Library interfaces with mimik ai.
What You'll Be Doing
Learning about topics relevant to working with the mimik Client Library AI interfaces, such as:
- Integrating AI components
- Configuring AI language model source
- Downloading AI language model
- Referencing downloaded AI language model
- Chatting with downloaded AI language model
- Processing AI language model chat stream responses
- Integrating and Downloading AI in one go
Prerequisites
Understanding the mimik Client Library component integration and initialization process as laid out in this article.
Understanding how to work with the mim OE Runtime in an iOS application.
Attaching a real iOS device to the development Mac and selecting it as the build target. This won't work with the iOS simulator.
NOTE: Working with the iOS Simulator and the mimik Client Libraries entails some special consideration. For more information about iOS Simulator support, see this tutorial.
Overview
Our example AI use case consists of a set of mimik ai components. Deployed as a package via the mimik Client Library to your iOS application environment, these components add a simple yet powerful interface to the world of AI language models.
So, in order to get your application ready to start communicating with AI language models, we need to deploy the mimik ai use case in its environment.
To simplify the work, we have prepared a configuration JSON file that describes how the mimik ai use case package needs to be deployed via the deploy use case method on the mimik Client Library. This adds everything your application needs to start communicating with AI language models.
In this tutorial, we won't cover how to integrate and initialize the mimik Client Library or how to start mim OE Runtime. See the prerequisites for tutorial links.
So, let's begin!
Integrating AI components
As seen in the code example below, we begin by gathering a few configuration values.
First, the API key acts like a password that you choose yourself. It will be used to secure the communication channels between the mimik ai use case components and your application environment.
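For illustration only, here is one minimal way to generate such a key. This is an assumption of convenience, not a mimik requirement; any sufficiently hard-to-guess string you choose works, including the fixed example key used later in this tutorial.

```swift
import Foundation

// A minimal sketch: generate a unique, hard-to-guess API key once and reuse it.
let apiKey = UUID().uuidString
print("Generated API key: \(apiKey)")
```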
Next, we collect the URL to the use case configuration JSON, which has been pre-configured to simplify deployment for developers. This configuration instructs the mimik Client Library to download the mILM edge microservice and configure it in a specific way. It also provides the mimik Client Library with information about the available RESTful endpoints and their configurations on the edge microservice.
With that, we have all the information needed to call the deploy use case method on the mimik Client Library, passing the `accessToken` (passed through to our method), `apiKey`, and `configUrl` values as parameters.
Notice that we intentionally omitted the `model` parameter. This decision was made to demonstrate the process of downloading the AI language model as a separate step. Although it is technically possible to combine this into a single step (as will be shown later in the tutorial), we chose to separate the concerns for improved readability. This also explains why there is no code present in the `downloadHandler` function at this point.
Next, we need to provide code in two additional handlers.
First, the `requestHandler` provides a reference to your AI integration call request. We save this reference so that we can call on it later, if needed. For example, we might use it to cancel the request or inquire about its state.
Next, we need to implement the `completionHandler`, which will handle the final result of the integration call once it has fully concluded.
In this handler, we validate the result of the call. If it’s successful, we return a success response along with a reference to the deployed use case. If there’s an issue, we return a failure response.
It's important to keep the reference to the deployed mimik ai use case, as we'll need it when making chat requests to the AI language model. You can either keep the reference in memory (as shown in the example below, for simplicity) or persist it permanently as an encoded object, as sketched next.
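Here is a minimal sketch of persisting that reference, assuming `EdgeClient.UseCase` also conforms to `Encodable` (the tutorial only tells us it is decodable); the storage key is illustrative:

```swift
import Foundation

// Hypothetical helpers: persist and restore the deployed use case reference,
// assuming EdgeClient.UseCase conforms to Codable.
func saveUseCase(_ useCase: EdgeClient.UseCase) {
    do {
        let data = try JSONEncoder().encode(useCase)
        UserDefaults.standard.set(data, forKey: "mimik-ai-use-case")
    } catch {
        print("Unable to persist the use case reference:", error.localizedDescription)
    }
}

func loadUseCase() -> EdgeClient.UseCase? {
    guard let data = UserDefaults.standard.data(forKey: "mimik-ai-use-case") else {
        return nil
    }
    return try? JSONDecoder().decode(EdgeClient.UseCase.self, from: data)
}
```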
Additionally, we want to clear the request reference that was saved in the `requestHandler` once the call concludes, regardless of whether it was successful or not.
As with all example code in this tutorial, heavily commented code will follow the detailed descriptions for each topic.
```swift
func integrateAI(accessToken: String) async -> Result<EdgeClient.UseCase, NSError> {

    // Your API key, used to secure the API communication channels between the mimik ai use case and the application environment.
    let apiKey = "1234-5678-910A"

    // A url to the mimik ai use case configuration json. It is pre-configured with values for this example
    let configUrl = "https://github.com/mimikgit/cocoapod-mim-OE-ai-SE-iOS-developer/releases/download/5.8.0/mimik-ai-use-case-config-2024112801.json"

    // Calling mimik Client Library to integrate the mimik ai use case from a configuration url. We'll intentionally leave the AI language model download to a separate call
    switch await self.edgeClient.integrateAI(accessToken: accessToken, apiKey: apiKey, configUrl: configUrl, model: nil, downloadHandler: { _ in
        // Not downloading the AI language model in this example method, so there is no need for any download handler code
    }, requestHandler: { [weak self] request in
        // Keeping the reference to the AI integration request, in case we want to examine its state or cancel it before it ends.
        self?.activeStream = request
    }) {

    // Validating the result of the AI integration request
    case .success(let result):
        // Clearing out the AI integration request reference
        activeStream = nil
        print("AI integration call successful")
        // AI integration request successful, returning a success with the deployed mimik ai use case reference
        return .success(result)

    case .failure(let error):
        print("AI integration call unsuccessful", error.localizedDescription)
        // Clearing out the AI integration request reference
        activeStream = nil
        // AI integration request unsuccessful, returning a failure
        return .failure(error)
    }
}
```
With that, we have the mimik ai use case deployed. Next, we'll look at downloading AI language models.
Configuring AI language model source
Before we can begin downloading AI language models locally to the user's device, there are a few preparatory steps we need to take as shown in the code example below.
Step 1: Choose an AI Language Model
The first task is to decide which AI language model to download. To simplify this process, we’ve provided an example definition of a third-party model. However, you can work with any AI language model that fits within the hardware and software capabilities of your iOS device.
In the example code below, we’ve defined a pre-configured AI language model as a JSON string.
We then convert the JSON string to a `Data` object, which is decoded into a full download configuration object.
Key Points in the Configuration JSON
A few key properties in the configuration JSON are worth noting:
- `expectedDownloadSize`: This defines the expected download size of the AI language model. This is important because downloading large models may require significant storage space on the user's device. The mimik Client Library will use this value to ensure there is enough available storage on the device before starting the download (a similar check is sketched after this list, for illustration).
- `url`: This specifies the location from which the AI language model will be downloaded. It can be any URL you have access to.
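For illustration, a storage check along these lines could also be performed in application code. This sketch is an assumption about how such a check might look, not the library's actual implementation:

```swift
import Foundation

// Illustrative sketch: verify there is enough free space for an expected download size.
// The mimik Client Library performs its own check; this only shows the general idea.
func hasEnoughFreeSpace(for expectedDownloadSize: Int64) -> Bool {
    let homeURL = URL(fileURLWithPath: NSHomeDirectory())
    guard let values = try? homeURL.resourceValues(forKeys: [.volumeAvailableCapacityForImportantUsageKey]),
          let availableCapacity = values.volumeAvailableCapacityForImportantUsage else {
        return false
    }
    return availableCapacity > expectedDownloadSize
}

// Example: the model configured below declares an expected size of 1.8 GB
print(hasEnoughFreeSpace(for: 1_800_000_000))
```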
Next Steps
With the AI language model configuration in hand, we can pass it as an object to the next step of this tutorial.
```swift
func languageModel() -> EdgeClient.AI.Model.CreateModelRequest? {

    let model = """
    {
        "expectedDownloadSize": 1800000000,
        "object": "model",
        "owned_by": "lmstudio-community",
        "id": "lmstudio-community/gemma-1.1-2b-it-GGUF",
        "url": "https://huggingface.co/lmstudio-community/gemma-1.1-2b-it-GGUF/resolve/main/gemma-1.1-2b-it-Q4_K_M.gguf?download=true"
    }
    """

    do {
        let data = Data(model.utf8)
        let decoder = JSONDecoder()
        let decodedData = try decoder.decode(EdgeClient.AI.Model.CreateModelRequest.self, from: data)
        return decodedData
    } catch {
        print("AI model error", error.localizedDescription)
        return nil
    }
}
```
Downloading AI language model
To initiate the AI language model download, we need to understand a few key values that are passed to the method as shown in the code example below.
Key Values for Starting the Download
- `accessToken`: This is the same Access Token value that we used earlier in this tutorial.
- `apiKey`: This is the API key we established earlier.
- `model`: This value comes from the `languageModel` method, as explained in the previous section.
Initiating the Download
With these values in hand, we can call the download model method on the mimik Client Library, passing the `accessToken`, `apiKey`, and `model` values as parameters.
Handling the Download Process
In the `downloadHandler`, we monitor the `download` value, which provides the progress of the AI language model download. For example, if the model is approximately 1.8GB in size, the download might take some time depending on the device's internet speed and hardware capabilities. In a production application, we would want to inform the user about the download progress; one possible approach is sketched after this paragraph. In this example, we simply print the progress to the console.
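As a hypothetical illustration of informing the user, the progress fraction computed in the `downloadHandler` could drive a SwiftUI progress bar. The types and names below are assumptions for this sketch, not part of the mimik Client Library:

```swift
import SwiftUI

// Hypothetical observable model, updated from the downloadHandler on the main thread, e.g.:
// DispatchQueue.main.async { progressModel.fraction = downloadProgress.size / downloadProgress.totalSize }
final class DownloadProgressModel: ObservableObject {
    @Published var fraction: Double = 0 // 0.0 ... 1.0
}

struct DownloadProgressView: View {
    @ObservedObject var model: DownloadProgressModel

    var body: some View {
        // A determinate progress bar for a 0...1 fraction
        ProgressView(value: model.fraction) {
            Text("Downloading AI language model…")
        }
        .padding()
    }
}
```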
Handling Requests and Completion
In the `requestHandler`, we are provided with a reference to the model download request. We save this reference so that we can use it later, if needed, such as to cancel the request or check its status.
We also need to implement the `completionHandler`, which will be called once the download request completes. Here, we validate the result of the call. If successful, we return a success response; if there's an issue, we return a failure response.
Finalizing the Request
Once the download concludes, whether successfully or not, we clear the request reference saved in the `requestHandler`.
Conclusion
After completing these checks, the AI language model should now be successfully downloaded and stored locally on the user's device.
In the next topic, we will learn how to reference and use the downloaded model.
```swift
func downloadAIModel(accessToken: String, apiKey: String, model: EdgeClient.AI.Model.CreateModelRequest, useCase: EdgeClient.UseCase) async -> Result<Bool, NSError> {

    // Calling mimik Client Library to download the AI language model using the mILM edge microservice that was deployed as part of the mimik ai use case
    switch await self.edgeClient.downloadAI(model: model, accessToken: accessToken, apiKey: apiKey, useCase: useCase, downloadHandler: { [weak self] download in

        // Capturing the download progress information, exiting if there is an issue
        guard case let .success(downloadProgress) = download else {
            print("Model download error")
            self?.activeStream = nil
            return
        }

        // Printing out a formatted download progress value to the console log for the developer and user's benefit
        let percent = String(format: "%.2f", ceil( (downloadProgress.size / downloadProgress.totalSize) * 10_000) / 100)
        print("Model download progress: \(percent)%")

    }, requestHandler: { [weak self] request in
        // Keeping the reference to the AI language model download request, in case we want to examine its state or cancel it before it ends.
        self?.activeStream = request
    }) {

    case .success(let downloadResult):
        // Clearing out the AI language model download request reference
        activeStream = nil
        print("Model download success", downloadResult)
        // AI language model download request successful, returning a success
        return .success(true)

    case .failure(let error):
        print("Model download error", error.localizedDescription)
        // Clearing out the AI language model download request reference
        activeStream = nil
        // AI language model download request unsuccessful, returning a failure
        return .failure(error)
    }
}
```
Referencing downloaded AI language model
To obtain a reference to a downloaded AI language model, we need to understand a few key values that are passed to our method, as shown in the code example below.
Key Values for Retrieving the Model
- `id`: This is the AI language model value from the `languageModel` method.
- `accessToken`: The same Access Token value used in other methods throughout this tutorial.
- `apiKey`: Our API key, which we established earlier.
- `useCase`: This value comes from the `integrateAI` method, which we encountered earlier in the tutorial.
Retrieving the Model
With these values prepared, we can call the ai models method on the mimik Client Library, passing the `id`, `accessToken`, `apiKey`, and `useCase` values as parameters.
Evaluating the Response
Once the call is made, we evaluate the result. If a match for the provided `id` is found, we return a success response along with the reference to the matched model. If there is an issue or no match is found, we return a failure.
Conclusion
At this point, we now have an object reference to the downloaded AI language model and are ready to begin interacting with it.
Let’s get chatting with it!
```swift
func findAIModel(id: String, accessToken: String, apiKey: String, useCase: EdgeClient.UseCase) async -> Result<EdgeClient.AI.Model, NSError> {

    // Calling mimik Client Library to find the AI language model matching the provided id
    guard case let .success(model) = await edgeClient.aiModel(id: id, accessToken: accessToken, apiKey: apiKey, useCase: useCase) else {
        // There was an issue with the call, returning a failure
        return .failure(NSError(domain: "Error", code: 500))
    }

    print("Found matching model:", model)
    // Matching model id was found, returning success with the matched AI language model reference
    return .success(model)
}
```
Chatting with downloaded AI language model
To start chatting with a downloaded AI language model, we need to understand a few key values passed to our method as shown in the code example below.
Key Values for Chatting with the Model
- `id`: The AI language model value from the `languageModel` method.
- `question`: The chat question we want to ask the AI language model.
- `useCase`: A value from the `integrateAI` method, which we encountered earlier in the tutorial.
- `accessToken`: The same Access Token value used in other methods throughout this tutorial.
- `apiKey`: Our API key, which we established earlier.
Initiating the Chat
With these values ready, we can call the ask ai method on the mimik Client Library, passing the `id`, `accessToken`, `apiKey`, `question`, and `useCase` values as parameters.
Handling the Request
In the `requestHandler`, we receive a reference to the AI language model chat request. We save this reference so that we can use it later, if needed, for example to cancel the request or inquire about its status.
Stream Handling and Response Processing
The most active part of the code is the `streamHandler`, where responses to our chat question are streamed from the AI language model. In the `streamHandler`, we validate each incoming stream response individually. If the response is successful, we pass it to a specialized `processAIChat` method for further processing. If there's an issue, we simply move on and wait for more stream entries, as the chat stream remains active until it is fully concluded in the `completionHandler`.
We’ve separated the areas of concern for stream handling and response processing to improve the clarity of the code example.
Handling the Completion
Finally, in the `completionHandler`, we receive the final result of the chat request once it fully concludes. Here, we validate the result of the call. If successful, we return a success response; if there's an issue, we return a failure response.
```swift
func askAIModel(id: String, question: String, useCase: EdgeClient.UseCase, accessToken: String, apiKey: String) async -> Result<Void, NSError> {

    // Calling mimik Client Library to start a chat stream with the downloaded AI language model
    switch await edgeClient.askAIModel(id: id, accessToken: accessToken, apiKey: apiKey, question: question, useCase: useCase, streamHandler: { [weak self] stream in

        // Validating incoming chat stream responses
        switch stream {
        case .success(let chatStream):
            // Incoming AI chat stream was successful, sending data for further processing
            self?.processAIChat(stream: chatStream)
        case .failure(let error):
            // Incoming AI chat stream was unsuccessful, waiting for more incoming stream data
            print("AI stream error", error.localizedDescription)
        }

    }, requestHandler: { [weak self] request in
        // Keeping the reference to the AI language model chat request, in case we want to examine its state or cancel it before it ends.
        self?.activeStream = request
    }) {
    case .success:
        // Clearing out the AI chat request reference
        activeStream = nil
        // AI chat request concluded successfully, returning a void success
        return .success(())
    case .failure(let error):
        // Clearing out the AI chat request reference
        activeStream = nil
        // AI chat request concluded unsuccessfully, returning a failure
        return .failure(error)
    }
}
```
Processing AI language model chat stream responses
Processing raw incoming streams of responses from AI language models can be tricky. This is where the mimik Client Library helps by categorizing the chat stream as shown in the code example below. While the accuracy of the processing depends on the AI language model being used, it should be relatively straightforward in our example.
Categorizing Incoming Responses
We use a `switch` statement to categorize the incoming chat responses based on their type.
For example, the `content` type represents information that you'd typically want to display to the user in the UI or combine to form complete sentences. In our example, we simply print the content to the console as it arrives; a minimal accumulation sketch follows below.
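Instead of printing each fragment, the content could be accumulated into a full response. This is a minimal sketch with illustrative names, not part of the example project:

```swift
// Illustrative accumulator: fragments arrive in order, so appending them
// yields the complete answer by the time the streamDone category arrives.
var fullResponse = ""

func accumulate(content: String) {
    // Skip the model state markers handled separately in processAIChat
    guard !content.contains("Model Ready"), !content.contains("Model Loading") else { return }
    fullResponse += content
}
```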
Special Response Handling
When a chat response indicates that the AI language model is either being loaded or is ready, we log these messages separately in the console with specialized messages for clarity.
Conclusion of the Stream
Once the sorting algorithm reaches the `streamDone` category, it signals that the AI language model chat stream has concluded.
Please note that the stream of responses may sometimes take some time to conclude, depending on the complexity of the AI language model, the chat question and the hardware capabilities of your device.
Cancelling the Stream
If the user decides they no longer wish to wait for the chat response stream to conclude, the developer can cancel the stream by using the request reference saved in the `requestHandler`, as sketched below.
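Here is what that cancellation could look like, assuming the saved request reference exposes a `cancel()` method; verify the actual request type and its API in your version of the mimik Client Library:

```swift
// Hypothetical sketch: cancel the in-flight chat stream via the saved reference.
// The cancel() call is an assumption about the request type's API.
func cancelActiveStream() {
    activeStream?.cancel()
    activeStream = nil
    print("AI chat stream cancelled")
}
```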
```swift
func processAIChat(stream: EdgeClient.AI.Model.CompletionType) {

    // Sorting the incoming AI chat stream data by its content type
    switch stream {

    // Main AI language model chat stream content type
    case .content(let content):

        // Checking if the stream contains specific key words, indicating the AI model states
        if content.contains("Model Ready") || content.contains("Model Loading") {
            return
        }

        // Printing the AI chat stream content to the console. Normally, you'd want to show this to the user in the UI, or process further elsewhere.
        print(content)

    // AI language model is being loaded
    case .modelLoading:
        print("Model Loading, please wait")

    // AI language model is ready
    case .modelReady:
        print("Model Ready, please wait")

    // Stream concluded
    case .streamDone:
        print("Stream Done")

    // Stream contains additional stream types
    case .comment, .event, .id, .retry:
        print("other stream types")

    // Stream contains unknown, unsupported content type.
    @unknown default:
        print("unknown stream type")
    }
}
```
Integrating and Downloading AI in one go
Similar to the `integrateAI` method we encountered earlier in this tutorial, we will integrate the mimik ai use case here as well. However, this time, we will also initiate the download of an AI language model within the same method, as shown in the code example below. This means we won't need to handle the model download separately, making it available to the application environment once this single step is complete.
Step 1: Gathering Configuration Values
As before, we begin by gathering a few essential configuration values:
- API Key: This acts like a password that you choose. It is used to secure the communication channels between the mimik ai use case components and your application environment.
- Config URL: This is the URL pointing to the pre-configured use case configuration JSON. This file simplifies deployment by instructing the mimik Client Library to download and configure the mILM (mimik ai Language Model) edge microservice. It also informs the mimik Client Library about the available RESTful endpoints and their configurations on the edge microservice.
Step 2: Deciding Which AI Language Model to Download
Next, we need to decide which AI language model to download. To simplify this, we have provided an example definition of a third-party model. In practice, you can work with any AI language model that fits within the hardware and software capabilities of your iOS device.
Step 3: Calling the deployUseCase Method
With all the necessary information, we can now call the deploy use case method on the mimik Client Library, passing the following values as parameters:
- `accessToken`: The token passed to our method.
- `apiKey`: Our own API key.
- `configUrl`: The URL to the pre-configured use case configuration JSON.
- `languageModel`: The AI language model we want to download.
Step 4: Monitoring the Download Progress
Unlike the earlier `integrateAI` method, we now need to monitor the `download` value in the `downloadHandler`. This value indicates the progress of the AI language model download.
For example, if the model is about 1.8GB in size, the download may take some time depending on the device's internet and hardware speeds. In a production application, we would want to keep the user informed about the progress. In this example, we simply print out the download progress to the console.
Step 5: Handling the Request
In the `requestHandler`, we are provided with a reference to the integration and download request. We save this reference so that we can use it later if needed, such as to cancel the request or inquire about its status.
Step 6: Finalizing the Process
Next, we provide code for the `completionHandler`, where we will receive the final result of the integration and download request once it fully concludes.
- If the result is successful, we return a success response.
- If there is an issue, we return a failure response.
Afterward, we clear the request reference that we saved in the `requestHandler` once the call concludes, regardless of whether it was successful or not.
Conclusion
Once all checks are completed successfully, we will have both the mimik ai use case integrated and the AI language model downloaded locally to the user’s device.
```swift
func integrateAndDownloadAI(accessToken: String) async -> Result<EdgeClient.UseCase, NSError> {

    // Your API key, used to secure the API communication channels between the mimik ai use case and the application environment.
    let apiKey = "1234-5678-910A"

    // A url to the mimik ai use case configuration json. It is pre-configured with values for this example
    let aiPackageConfigUrl = "https://github.com/mimikgit/cocoapod-mimOE-SE-iOS-developer/releases/download/5.6.2/mimik-ai-use-case-config.json"

    guard let languageModel = languageModel() else {
        return .failure(NSError(domain: "Error", code: 500))
    }

    // Calling mimik Client Library to integrate the mimik ai use case from a configuration url. This time, we also pass the AI language model so it is downloaded in the same call
    switch await self.edgeClient.integrateAI(accessToken: accessToken, apiKey: apiKey, configUrl: aiPackageConfigUrl, model: languageModel, downloadHandler: { [weak self] download in

        // Capturing the download progress information, exiting if there is an issue
        guard case let .success(downloadProgress) = download else {
            print("Model download error")
            self?.activeStream = nil
            return
        }

        // Printing out a formatted download progress value to the console log for the developer and user's benefit
        let percent = String(format: "%.2f", ceil( (downloadProgress.size / downloadProgress.totalSize) * 10_000) / 100)
        print("Model download progress: \(percent)%")

    }, requestHandler: { [weak self] request in
        // Keeping the reference to the AI integration request, in case we want to examine its state or cancel it before it ends.
        self?.activeStream = request
    }) {

    // Validating the result of the AI integration request
    case .success(let result):

        // Clearing out the AI integration request reference
        activeStream = nil
        print("AI integration call successful")
        // AI integration request successful, returning a success with the deployed mimik ai use case reference
        return .success(result)

    case .failure(let error):
        print("AI integration call unsuccessful", error.localizedDescription)
        // Clearing out the AI integration request reference
        activeStream = nil
        // AI integration request unsuccessful, returning a failure
        return .failure(error)
    }
}
```
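To tie the tutorial's pieces together, here is a hedged sketch of one possible calling sequence using the example methods defined above; it assumes the decoded `CreateModelRequest` exposes its `id` property:

```swift
// Illustrative end-to-end flow built from this tutorial's example methods.
func runAIDemo(accessToken: String) async {
    let apiKey = "1234-5678-910A"

    // 1. Integrate the mimik ai use case and download the AI language model in one go
    guard case let .success(useCase) = await integrateAndDownloadAI(accessToken: accessToken),
          let model = languageModel() else {
        print("AI setup failed")
        return
    }

    // 2. Confirm the downloaded model is available (assumes model.id is accessible)
    guard case let .success(foundModel) = await findAIModel(id: model.id, accessToken: accessToken, apiKey: apiKey, useCase: useCase) else {
        print("No matching AI language model found")
        return
    }
    print("Ready to chat with:", foundModel)

    // 3. Chat with the model; streamed responses are handled by processAIChat
    _ = await askAIModel(id: model.id, question: "What is mimik ai?", useCase: useCase, accessToken: accessToken, apiKey: apiKey)
}
```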
Example Xcode project also works Offline
Since the AI language model gets fully downloaded onto your device, the example application can chat with the model even when the device's internet connection is disabled, for example in airplane mode. Of course, the AI language model has to be downloaded first, before going offline.
iOS application project example on GitHub.
TestFlight
This example application is also available as a pre-configured download on TestFlight.
- Open and accept this TestFlight link on the iOS device you want to install the application on.
- Open the application once it has finished installing through TestFlight.
Additional reading
In order to get more out of this article, the reader could further familiarize themselves with the following concepts and techniques: