Embedding dimension is not changing when using the llama-text-embed-v2 model hosted by Pinecone

def get_embedding(texts):
    """
    Get embeddings for a list of texts
    """
    embeddings = pc.inference.embed(
        model="llama-text-embed-v2",
        inputs=texts,
        parameters={
            "input_type": "passage",
            "dimension": 2048
        }
    )
    return embeddings

I have set up my function like this as per the Pinecone documentation, but it is still not working and gives me the default dimension of 1024. I have also tried setting the API version header on the client:

pc = Pinecone(api_key=PINECONE_API, headers={
    "X-Pinecone-API-Version": "2025-04"
})

but it still doesn't work. Please guide me on how to fix this issue. Thank you.

@ahmedosamaizhar21 Thanks for reaching out! Currently, the latest Python SDK/client version (6.0.2) supports API version 2025-01. A new SDK version supporting the latest API version (2025-04) is set to release by the end of April.
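For reference, you can confirm which client version you have installed; this is a quick sketch using the standard library, assuming the package was installed as pinecone (formerly pinecone-client):

from importlib.metadata import version

# Print the installed Pinecone client version (6.0.2 or lower as of now)
print(version("pinecone"))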

In the meantime, you can send a direct request to the REST API:

import requests
url = "https://api.pinecone.io/embed"
headers = {
    "Content-Type": "application/json",
    "Accept": "application/json",
    "api-key": "<PINECONE-API-KEY>",
    "x-pinecone-api-version": "2025-04",
}
data = {
    "model": "llama-text-embed-v2",
    "parameters": {
        "input_type": "passage",
        "dimension": 2048
    },
    "inputs": [
        {
            "text": "sample text"
        }
    ]
}
response = requests.post(url, headers=headers, json=data)
response.raise_for_status()  # surface HTTP errors early

This should produce a text embedding vector with dimension 2048. Dynamic MRL dimensions for llama-text-embed-v2 are supported starting in API version 2025-04.
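If you want to verify the dimension programmatically, you can inspect the response body; this is a minimal sketch assuming the embeddings come back under data[*].values, as in the inference API's response shape:

result = response.json()

# Each entry in "data" corresponds to one input text
embedding = result["data"][0]["values"]
print(len(embedding))  # should print 2048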