GenerativeModel
@available(iOS 15.0, macOS 12.0, tvOS 15.0, watchOS 8.0, *)
public final class GenerativeModel : Sendable
A type that represents a remote multimodal model (like Gemini), with the ability to generate content based on various input types.
-
Generates content from String and/or image inputs, given to the model as a prompt, that are representable as one or more Parts. Since Parts do not specify a role, this method is intended for generating content from zero-shot or “direct” prompts. For few-shot prompts, see generateContent(_ content: [ModelContent]).
Throws
A GenerateContentError if the request failed.
Declaration
Swift
public func generateContent(_ parts: any PartsRepresentable...) async throws -> GenerateContentResponse
Parameters
parts
The input(s) given to the model as a prompt (see PartsRepresentable for conforming types).
Return Value
The content generated by the model.
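A minimal sketch of calling this overload and its streaming counterpart from an async context. The model name, the GenerativeModel(name:apiKey:) initializer, and the response’s text property are assumptions not documented on this page; check the SDK for the exact configuration API:

```swift
import GoogleGenerativeAI

// Assumed initializer; the actual SDK may take additional configuration.
let model = GenerativeModel(name: "gemini-1.5-flash", apiKey: "YOUR_API_KEY")

// Zero-shot prompt: parts are passed variadically, with no role attached.
let response = try await model.generateContent("Write a haiku about the sea.")
print(response.text ?? "No text in response")

// The streaming variant yields partial responses as they arrive.
for try await chunk in try model.generateContentStream("Write a haiku about the sea.") {
    print(chunk.text ?? "", terminator: "")
}
```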
-
Generates new content from input content given to the model as a prompt.
Throws
A GenerateContentError if the request failed.
Declaration
Swift
public func generateContent(_ content: [ModelContent]) async throws -> GenerateContentResponse
Parameters
content
The input(s) given to the model as a prompt.
Return Value
The generated content response from the model.
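Because [ModelContent] carries per-message roles, this overload suits few-shot prompts. A sketch, assuming a ModelContent(role:parts:) initializer and a text property on the response (neither is documented on this page):

```swift
import GoogleGenerativeAI

let model = GenerativeModel(name: "gemini-1.5-flash", apiKey: "YOUR_API_KEY")

// Few-shot prompt: alternating user/model turns establish the pattern,
// and the final user turn is the actual query.
let prompt: [ModelContent] = [
    ModelContent(role: "user", parts: "Translate to French: cheese"),
    ModelContent(role: "model", parts: "fromage"),
    ModelContent(role: "user", parts: "Translate to French: bread"),
]
let response = try await model.generateContent(prompt)
print(response.text ?? "No text in response")
```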
-
Generates content from String and/or image inputs, given to the model as a prompt, that are representable as one or more Parts. Since Parts do not specify a role, this method is intended for generating content from zero-shot or “direct” prompts. For few-shot prompts, see generateContentStream(_ content: @autoclosure () throws -> [ModelContent]).
Declaration
Swift
@available(macOS 12.0, *) public func generateContentStream(_ parts: any PartsRepresentable...) throws -> AsyncThrowingStream<GenerateContentResponse, Error>
Parameters
parts
The input(s) given to the model as a prompt (see PartsRepresentable for conforming types).
Return Value
A stream wrapping content generated by the model, or a GenerateContentError if an error occurred.
-
Generates new content from input content given to the model as a prompt.
Declaration
Swift
@available(macOS 12.0, *) public func generateContentStream(_ content: [ModelContent]) throws -> AsyncThrowingStream<GenerateContentResponse, Error>
Parameters
content
The input(s) given to the model as a prompt.
Return Value
A stream wrapping content generated by the model, or a GenerateContentError if an error occurred.
-
Creates a new chat conversation using this model with the provided history.
Declaration
Swift
public func startChat(history: [ModelContent] = []) -> Chat
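A sketch of seeding a chat with prior turns and continuing the conversation. Chat’s sendMessage method, the ModelContent(role:parts:) initializer, and the response’s text property are assumptions; consult the Chat documentation for the exact API:

```swift
import GoogleGenerativeAI

let model = GenerativeModel(name: "gemini-1.5-flash", apiKey: "YOUR_API_KEY")

// Seed the chat with prior turns; the Chat instance tracks history
// across subsequent messages.
let chat = model.startChat(history: [
    ModelContent(role: "user", parts: "Hello, I'm planning a trip to Japan."),
    ModelContent(role: "model", parts: "Great! What would you like to know?"),
])
let reply = try await chat.sendMessage("Suggest three cities to visit.")
print(reply.text ?? "No text in response")
```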
-
Runs the model’s tokenizer on String and/or image inputs that are representable as one or more Parts. Since Parts do not specify a role, this method is intended for tokenizing zero-shot or “direct” prompts. For few-shot input, see countTokens(_ content: @autoclosure () throws -> [ModelContent]).
Declaration
Swift
public func countTokens(_ parts: any PartsRepresentable...) async throws -> CountTokensResponse
Parameters
parts
The input(s) given to the model as a prompt (see PartsRepresentable for conforming types).
Return Value
The results of running the model’s tokenizer on the input; contains totalTokens.
-
Runs the model’s tokenizer on the input content and returns the token count.
Declaration
Swift
public func countTokens(_ content: [ModelContent]) async throws -> CountTokensResponse
Parameters
content
The input given to the model as a prompt.
Return Value
The results of running the model’s tokenizer on the input; contains totalTokens.
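A sketch of checking a prompt’s token count before sending it, e.g. to stay within the model’s context limit. Only the initializer is an assumption here; countTokens and totalTokens appear above:

```swift
import GoogleGenerativeAI

let model = GenerativeModel(name: "gemini-1.5-flash", apiKey: "YOUR_API_KEY")

// The variadic overload accepts the same PartsRepresentable inputs
// as generateContent, so the count reflects the actual request payload.
let result = try await model.countTokens("A long essay-style prompt…")
print("Prompt uses \(result.totalTokens) tokens")
```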