diff --git a/tutorials/27_First_RAG_Pipeline.ipynb b/tutorials/27_First_RAG_Pipeline.ipynb index 68e4188..f24153b 100644 --- a/tutorials/27_First_RAG_Pipeline.ipynb +++ b/tutorials/27_First_RAG_Pipeline.ipynb @@ -28,18 +28,6 @@ "For this tutorial, you'll use the Wikipedia pages of [Seven Wonders of the Ancient World](https://en.wikipedia.org/wiki/Wonders_of_the_World) as Documents, but you can replace them with any text you want.\n" ] }, - { - "cell_type": "markdown", - "metadata": { - "id": "QXjVlbPiO-qZ" - }, - "source": [ - "## Preparing the Colab Environment\n", - "\n", - "- [Enable GPU Runtime in Colab](https://docs.haystack.deepset.ai/docs/enabling-gpu-acceleration)\n", - "- [Set logging level to INFO](https://docs.haystack.deepset.ai/docs/logging)" - ] - }, { "cell_type": "markdown", "metadata": { @@ -361,7 +349,7 @@ "### Initialize a ChatGenerator\n", "\n", "\n", - "ChatGenerators are the components that interact with large language models (LLMs). Now, set `OPENAI_API_KEY` environment variable and initialize a [OpenAIChatGenerator](https://docs.haystack.deepset.ai/docs/OpenAIChatGenerator) that can communicate with OpenAI GPT models. As you initialize, provide a model name:" + "ChatGenerators are the components that interact with large language models (LLMs). Now, set the `OPENAI_API_KEY` environment variable and initialize an [OpenAIChatGenerator](https://docs.haystack.deepset.ai/docs/openaichatgenerator) that can communicate with OpenAI GPT models. 
As you initialize, provide a model name:" ] }, { @@ -626,4 +614,4 @@ }, "nbformat": 4, "nbformat_minor": 0 -} \ No newline at end of file +} diff --git a/tutorials/29_Serializing_Pipelines.ipynb b/tutorials/29_Serializing_Pipelines.ipynb index 9135833..ed1e65c 100644 --- a/tutorials/29_Serializing_Pipelines.ipynb +++ b/tutorials/29_Serializing_Pipelines.ipynb @@ -30,18 +30,6 @@ "Although it's possible to serialize into other formats too, Haystack supports YAML out of the box to make it easy for humans to make changes without the need to go back and forth with Python code. In this tutorial, we will create a very simple pipeline in Python code, serialize it into YAML, make changes to it, and deserialize it back into a Haystack `Pipeline`." ] }, - { - "cell_type": "markdown", - "metadata": { - "id": "9smrsiIqfS7J" - }, - "source": [ - "## Preparing the Colab Environment\n", - "\n", - "- [Enable GPU Runtime in Colab](https://docs.haystack.deepset.ai/docs/enabling-gpu-acceleration)\n", - "- [Set logging level to INFO](https://docs.haystack.deepset.ai/docs/logging)" - ] - }, { "cell_type": "markdown", "metadata": { diff --git a/tutorials/30_File_Type_Preprocessing_Index_Pipeline.ipynb b/tutorials/30_File_Type_Preprocessing_Index_Pipeline.ipynb index 9bb496e..49332e9 100644 --- a/tutorials/30_File_Type_Preprocessing_Index_Pipeline.ipynb +++ b/tutorials/30_File_Type_Preprocessing_Index_Pipeline.ipynb @@ -40,18 +40,6 @@ "Optionally, you can keep going to see how to use these documents in a query pipeline as well." 
] }, - { - "cell_type": "markdown", - "metadata": { - "id": "rns_B_NGN0Ze" - }, - "source": [ - "## Preparing the Colab Environment\n", - "\n", - "- [Enable GPU Runtime in Colab](https://docs.haystack.deepset.ai/docs/enabling-gpu-acceleration)\n", - "- [Set logging level to INFO](https://docs.haystack.deepset.ai/docs/logging)" - ] - }, { "cell_type": "markdown", "metadata": { diff --git a/tutorials/31_Metadata_Filtering.ipynb b/tutorials/31_Metadata_Filtering.ipynb index a4516c7..6871e8c 100644 --- a/tutorials/31_Metadata_Filtering.ipynb +++ b/tutorials/31_Metadata_Filtering.ipynb @@ -28,18 +28,6 @@ "Although new retrieval techniques are great, sometimes you just know that you want to perform search on a specific group of documents in your document store. This can be anything from all the documents that are related to a specific _user_, or that were published after a certain _date_ and so on. Metadata filtering is very useful in these situations. In this tutorial, we will create a few simple documents containing information about Haystack, where the metadata includes information on what version of Haystack the information relates to. 
We will then do metadata filtering to make sure we are answering the question based only on information about Haystack 2.0.\n" ] }, - { - "cell_type": "markdown", - "metadata": { - "id": "tM3U5KyegTAE" - }, - "source": [ - "## Preparing the Colab Environment\n", - "\n", - "- [Enable GPU Runtime in Colab](https://docs.haystack.deepset.ai/docs/enabling-gpu-acceleration)\n", - "- [Set logging level to INFO](https://docs.haystack.deepset.ai/docs/logging)" - ] - }, { "cell_type": "markdown", "metadata": { @@ -269,4 +257,4 @@ }, "nbformat": 4, "nbformat_minor": 0 -} \ No newline at end of file +} diff --git a/tutorials/32_Classifying_Documents_and_Queries_by_Language.ipynb b/tutorials/32_Classifying_Documents_and_Queries_by_Language.ipynb index 3f103cb..45e9a58 100644 --- a/tutorials/32_Classifying_Documents_and_Queries_by_Language.ipynb +++ b/tutorials/32_Classifying_Documents_and_Queries_by_Language.ipynb @@ -32,18 +32,6 @@ "In the last section, you'll build a multi-lingual RAG pipeline. The language of a question is detected, and only documents in that language are used to generate the answer. 
For this section, the [`TextLanguageRouter`](https://docs.haystack.deepset.ai/docs/textlanguagerouter) will come in handy.\n" ] }, - { - "cell_type": "markdown", - "metadata": { - "id": "oBa4Q25cGTr6" - }, - "source": [ - "## Preparing the Colab Environment\n", - "\n", - "- [Enable GPU Runtime in Colab](https://docs.haystack.deepset.ai/docs/enabling-gpu-acceleration)\n", - "- [Set logging level to INFO](https://docs.haystack.deepset.ai/docs/logging)" - ] - }, { "cell_type": "markdown", "metadata": { diff --git a/tutorials/33_Hybrid_Retrieval.ipynb b/tutorials/33_Hybrid_Retrieval.ipynb index d8c1ee4..95ec913 100644 --- a/tutorials/33_Hybrid_Retrieval.ipynb +++ b/tutorials/33_Hybrid_Retrieval.ipynb @@ -28,18 +28,6 @@ "There are many cases when a simple keyword-based approaches like BM25 performs better than a dense retrieval (for example in a specific domain like healthcare) because a dense model needs to be trained on data. For more details about Hybrid Retrieval, check out [Blog Post: Hybrid Document Retrieval](https://haystack.deepset.ai/blog/hybrid-retrieval)." 
] }, - { - "cell_type": "markdown", - "metadata": { - "id": "ITs3WTT5lXQT" - }, - "source": [ - "## Preparing the Colab Environment\n", - "\n", - "- [Enable GPU Runtime in Colab](https://docs.haystack.deepset.ai/docs/enabling-gpu-acceleration)\n", - "- [Set logging level to INFO](https://docs.haystack.deepset.ai/docs/setting-the-log-level)" - ] - }, { "cell_type": "markdown", "metadata": { @@ -571,4 +559,4 @@ }, "nbformat": 4, "nbformat_minor": 0 -} \ No newline at end of file +} diff --git a/tutorials/34_Extractive_QA_Pipeline.ipynb b/tutorials/34_Extractive_QA_Pipeline.ipynb index 0d6cf15..64cba2f 100644 --- a/tutorials/34_Extractive_QA_Pipeline.ipynb +++ b/tutorials/34_Extractive_QA_Pipeline.ipynb @@ -29,18 +29,6 @@ "To get data into the extractive pipeline, you'll also build an indexing pipeline to ingest the [Wikipedia pages of Seven Wonders of the Ancient World dataset](https://en.wikipedia.org/wiki/Wonders_of_the_World)." ] }, - { - "cell_type": "markdown", - "metadata": { - "id": "eF_hnatJUEHq" - }, - "source": [ - "## Preparing the Colab Environment\n", - "\n", - "- [Enable GPU Runtime in Colab](https://docs.haystack.deepset.ai/docs/enabling-gpu-acceleration)\n", - "- [Set logging level to INFO](https://docs.haystack.deepset.ai/docs/logging)" - ] - }, { "cell_type": "markdown", "metadata": { diff --git a/tutorials/35_Evaluating_RAG_Pipelines.ipynb b/tutorials/35_Evaluating_RAG_Pipelines.ipynb index bda65d2..291d395 100644 --- a/tutorials/35_Evaluating_RAG_Pipelines.ipynb +++ b/tutorials/35_Evaluating_RAG_Pipelines.ipynb @@ -52,18 +52,6 @@ "\n" ] }, - { - "cell_type": "markdown", - "metadata": { - "id": "QXjVlbPiO-qZ" - }, - "source": [ - "## Preparing the Colab Environment\n", - "\n", - "- [Enable GPU Runtime in Colab](https://docs.haystack.deepset.ai/docs/enabling-gpu-acceleration)\n", - "- [Set logging level to INFO](https://docs.haystack.deepset.ai/docs/setting-the-log-level)" - ] - }, { "cell_type": "markdown", "metadata": { @@ -382,7 +370,7 @@ "\n", 
"In this example, we'll be using:\n", "- [`InMemoryEmbeddingRetriever`](https://docs.haystack.deepset.ai/docs/inmemoryembeddingretriever) which will get the relevant documents to the query.\n", - "- [`OpenAIChatGenerator`](https://docs.haystack.deepset.ai/docs/OpenAIChatGenerator) to generate answers to queries. You can replace `OpenAIChatGenerator` in your pipeline with another `ChatGenerator`. Check out the full list of generators [here](https://docs.haystack.deepset.ai/docs/generators)." + "- [`OpenAIChatGenerator`](https://docs.haystack.deepset.ai/docs/openaichatgenerator) to generate answers to queries. You can replace `OpenAIChatGenerator` in your pipeline with another `ChatGenerator`. Check out the full list of generators [here](https://docs.haystack.deepset.ai/docs/generators)." ] }, { diff --git a/tutorials/40_Building_Chat_Application_with_Function_Calling.ipynb b/tutorials/40_Building_Chat_Application_with_Function_Calling.ipynb index 92672b0..c0c05c8 100644 --- a/tutorials/40_Building_Chat_Application_with_Function_Calling.ipynb +++ b/tutorials/40_Building_Chat_Application_with_Function_Calling.ipynb @@ -27,7 +27,7 @@ "\n", "📚 Useful Sources:\n", "* [OpenAIChatGenerator Docs](https://docs.haystack.deepset.ai/docs/openaichatgenerator)\n", - "* [OpenAIChatGenerator API Reference](https://docs.haystack.deepset.ai/reference/generator-api#openaichatgenerator)\n", + "* [OpenAIChatGenerator API Reference](https://docs.haystack.deepset.ai/reference/generators-api#openaichatgenerator)\n", "* [🧑‍🍳 Cookbook: Function Calling with OpenAIChatGenerator](https://github.com/deepset-ai/haystack-cookbook/blob/main/notebooks/function_calling_with_OpenAIChatGenerator.ipynb)\n", "\n", "[OpenAI's function calling](https://platform.openai.com/docs/guides/function-calling) connects large language models to external tools. 
By providing a `tools` list with functions and their specifications to the OpenAI API calls, you can easily build chat assistants that can answer questions by calling external APIs or extract structured information from text.\n", diff --git a/tutorials/42_Sentence_Window_Retriever.ipynb b/tutorials/42_Sentence_Window_Retriever.ipynb index 812408c..c9c8e4e 100644 --- a/tutorials/42_Sentence_Window_Retriever.ipynb +++ b/tutorials/42_Sentence_Window_Retriever.ipynb @@ -24,17 +24,6 @@ "`SentenceWindowRetriever(document_store=doc_store, window_size=2)`" ] }, - { - "cell_type": "markdown", - "id": "784caaa2", - "metadata": {}, - "source": [ - "\n", - "## Preparing the Colab Environment\n", - "\n", - "- [Enable GPU Runtime](https://docs.haystack.deepset.ai/docs/enabling-gpu-acceleration#enabling-the-gpu-in-colab)\n" - ] - }, { "cell_type": "markdown", "id": "98c2f9d3", diff --git a/tutorials/44_Creating_Custom_SuperComponents.ipynb b/tutorials/44_Creating_Custom_SuperComponents.ipynb index b298399..092a0ed 100644 --- a/tutorials/44_Creating_Custom_SuperComponents.ipynb +++ b/tutorials/44_Creating_Custom_SuperComponents.ipynb @@ -10,7 +10,7 @@ "\n", "- **Level**: Intermediate\n", "- **Time to complete**: 20 minutes\n", - "- **Concepts and Components Used**: [`@super_component`](https://docs.haystack.deepset.ai/docs/supercomponents), [`Pipeline`](https://docs.haystack.deepset.ai/docs/pipeline), [`DocumentJoiner`](https://docs.haystack.deepset.ai/docs/documentjoiner), [`SentenceTransformersTextEmbedder`](https://docs.haystack.deepset.ai/docs/sentencetransformerstextembedder), [`InMemoryBM25Retriever`](https://docs.haystack.deepset.ai/docs/inmemorybm25retriever), [`InMemoryEmbeddingRetriever`](https://docs.haystack.deepset.ai/docs/inmemoryembeddingretriever), [`TransformersSimilarityRanker`](https://docs.haystack.deepset.ai/docs/transformerssimilarityranker)\n", + "- **Concepts and Components Used**: [`@super_component`](https://docs.haystack.deepset.ai/docs/supercomponents), 
[`Pipeline`](https://docs.haystack.deepset.ai/docs/pipelines), [`DocumentJoiner`](https://docs.haystack.deepset.ai/docs/documentjoiner), [`SentenceTransformersTextEmbedder`](https://docs.haystack.deepset.ai/docs/sentencetransformerstextembedder), [`InMemoryBM25Retriever`](https://docs.haystack.deepset.ai/docs/inmemorybm25retriever), [`InMemoryEmbeddingRetriever`](https://docs.haystack.deepset.ai/docs/inmemoryembeddingretriever), [`TransformersSimilarityRanker`](https://docs.haystack.deepset.ai/docs/transformerssimilarityranker)\n", "- **Goal**: After completing this tutorial, you'll have learned how to create custom SuperComponents using the `@super_component` decorator to simplify complex pipelines and make them reusable as components." ] }, diff --git a/tutorials/template.ipynb b/tutorials/template.ipynb index 9970a3a..8db984c 100644 --- a/tutorials/template.ipynb +++ b/tutorials/template.ipynb @@ -21,16 +21,6 @@ "*Here provide a short description of the tutorial. What does it teach? What's its expected outcome?*" ] }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Preparing the Colab Environment\n", - "\n", - "- [Enable GPU Runtime in Colab](https://docs.haystack.deepset.ai/docs/enabling-gpu-acceleration#enabling-the-gpu-in-colab)\n", - "- [Set logging level to INFO](https://docs.haystack.deepset.ai/docs/log-level)" - ] - }, { "cell_type": "markdown", "metadata": {},