Based on the documentation on namespaces and the Databricks integration, is the namespace feature a hard requirement for using the Pinecone Spark connector to save embeddings to an index?
Now that the free tier (the gcp-starter environment) does not support namespaces, are there any workarounds for using Pinecone with Spark in Databricks?
If you want to partition data, I would assume yes, since there is no other way for that integration to work. If you just want to put everything into the default namespace, you can simply omit the option. That said, you may run into issues with everything living in one namespace, since it will impact search results.
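To illustrate "just omit the option": a minimal PySpark sketch of writing embeddings to Pinecone without any namespace configuration, so everything lands in the default namespace. This assumes the connector's `io.pinecone.spark.pinecone.Pinecone` format and `pinecone.*` option names as documented for the Pinecone Spark connector; the API key, environment, project, and index values are placeholders, and you should verify the expected DataFrame schema (e.g. `id`, `values`) against your connector version. It is a configuration sketch, not runnable as-is.

```python
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, ArrayType, FloatType

spark = SparkSession.builder.getOrCreate()

# Minimal schema the connector expects: an id column and a vector column.
# (Check your connector version's docs for the exact required columns.)
schema = StructType([
    StructField("id", StringType(), nullable=False),
    StructField("values", ArrayType(FloatType()), nullable=False),
])

df = spark.createDataFrame(
    [("vec-1", [0.1, 0.2, 0.3]), ("vec-2", [0.4, 0.5, 0.6])],
    schema=schema,
)

# No namespace option is set, so all vectors go to the default namespace --
# which is the behavior you want on gcp-starter, where namespaces are unsupported.
(
    df.write
    .format("io.pinecone.spark.pinecone.Pinecone")          # connector format name
    .option("pinecone.apiKey", "YOUR_API_KEY")               # placeholder
    .option("pinecone.environment", "gcp-starter")           # free-tier environment
    .option("pinecone.projectName", "YOUR_PROJECT")          # placeholder
    .option("pinecone.indexName", "YOUR_INDEX")              # placeholder
    .mode("append")
    .save()
)
```

If you later need partitioning on a tier without namespaces, a common workaround is to add a metadata field (e.g. a `tenant` tag) and use metadata filtering at query time instead.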