Reviews of "Build LangChain Applications using Vertex AI: Challenge Lab"

Apply your skills in the Google Cloud console


Reviews

SHIVANK U. · reviewed about 1 year ago

YERRAM A. · reviewed about 1 year ago

Maurizio . · reviewed about 1 year ago

He H. · reviewed about 1 year ago

Alan J. · reviewed about 1 year ago

Salman A. · reviewed about 1 year ago

Vikas S. · reviewed about 1 year ago

Michal R. · reviewed about 1 year ago

S Narendra D. · reviewed about 1 year ago

PADMA M. · reviewed about 1 year ago

Ashish W. · reviewed about 1 year ago

Muhammad H. · reviewed about 1 year ago

Lakshit J. · reviewed about 1 year ago

Working by marcodelmart.com

Marco Flavio D. · reviewed about 1 year ago

Thanks

El H. · reviewed about 1 year ago

The Jupyter labs kept locking up, and it took almost until the end of the timer to get this to finish.

Joey G. · reviewed about 1 year ago

Seakingretsu G. · reviewed about 1 year ago

Raghu D. · reviewed about 1 year ago

Task 2 does not work with:

    text_splitter = INSERT_CLASSNAME(chunk_size=10000, chunk_overlap=1000)
    docs = text_splitter.INSERT_METHOD_NAME(documents)
    print(f"# of documents = {len(docs)}")

Instead, you need to use:

    text_splitter = RecursiveCharacterTextSplitter(chunk_size=10000, chunk_overlap=1000)

    # Assuming 'documents' contains Document objects, extract the text content
    text_contents = [doc.page_content for doc in documents if hasattr(doc, 'page_content')]

    # Split each document's text into smaller chunks
    docs = []
    for text in text_contents:
        split_docs = text_splitter.create_documents([text])  # Split each document separately
        docs.extend(split_docs)

    print(f"# of documents = {len(docs)}")
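For anyone hitting the same issue, here is a minimal, self-contained sketch of that splitting step. It assumes the langchain-core and langchain-text-splitters packages are installed; the sample documents below are hypothetical stand-ins for whatever the lab loads earlier in the notebook:

    from langchain_core.documents import Document
    from langchain_text_splitters import RecursiveCharacterTextSplitter

    # Hypothetical stand-ins for the documents loaded earlier in the lab.
    documents = [
        Document(page_content="LangChain composes prompts, models, and retrievers. " * 400),
        Document(page_content="Vertex AI hosts foundation models on Google Cloud. " * 400),
    ]

    text_splitter = RecursiveCharacterTextSplitter(chunk_size=10000, chunk_overlap=1000)

    # Extract the raw text, then split each document separately so that
    # no chunk ever spans two source documents.
    text_contents = [doc.page_content for doc in documents if hasattr(doc, "page_content")]
    docs = []
    for text in text_contents:
        docs.extend(text_splitter.create_documents([text]))

    print(f"# of documents = {len(docs)}")

Note that text_splitter.split_documents(documents) performs the same per-document splitting in a single call (and also preserves each document's metadata), which is presumably what the lab's INSERT_METHOD_NAME placeholder expects.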

Yoha G. · reviewed about 1 year ago

Carlos G. · reviewed about 1 year ago

Malvika V. · reviewed about 1 year ago

Sameer D. · reviewed about 1 year ago

vaibhav s. · reviewed about 1 year ago

Saish B. · reviewed about 1 year ago

Jacek S. · reviewed about 1 year ago

We cannot guarantee that published reviews come from consumers who have purchased or used the product. Reviews are not verified by Google.