Build LangChain Applications using Vertex AI: Challenge Lab reviews
431 reviews
SHIVANK U. · Reviewed about 1 year ago
YERRAM A. · Reviewed about 1 year ago
Maurizio . · Reviewed about 1 year ago
He H. · Reviewed about 1 year ago
Alan J. · Reviewed about 1 year ago
Salman A. · Reviewed about 1 year ago
Vikas S. · Reviewed about 1 year ago
Michal R. · Reviewed about 1 year ago
S Narendra D. · Reviewed about 1 year ago
PADMA M. · Reviewed about 1 year ago
Ashish W. · Reviewed about 1 year ago
Muhammad H. · Reviewed about 1 year ago
Lakshit J. · Reviewed about 1 year ago
Working by marcodelmart.com
Marco Flavio D. · Reviewed about 1 year ago
Thanks
El H. · Reviewed about 1 year ago
Jupyter Lab kept locking up, and it took almost until the end of the timer to get this to finish.
Joey G. · Reviewed about 1 year ago
Seakingretsu G. · Reviewed about 1 year ago
Raghu D. · Reviewed about 1 year ago
Task 2 does not work with:

    text_splitter = INSERT_CLASSNAME(chunk_size=10000, chunk_overlap=1000)
    docs = text_splitter.INSERT_METHOD_NAME(documents)
    print(f"# of documents = {len(docs)}")

Instead, you need to use:

    text_splitter = RecursiveCharacterTextSplitter(chunk_size=10000, chunk_overlap=1000)
    # Assuming 'documents' contains Document objects, extract the text content
    text_contents = [doc.page_content for doc in documents if hasattr(doc, 'page_content')]
    # Split each document's text into smaller chunks
    docs = []
    for text in text_contents:
        split_docs = text_splitter.create_documents([text])  # Split each document separately
        docs.extend(split_docs)
    print(f"# of documents = {len(docs)}")
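The chunking behavior this workaround relies on can be illustrated without installing LangChain. The sketch below is a simplified, hypothetical stand-in for what a character-based splitter with `chunk_size=10000` and `chunk_overlap=1000` produces (the function name is ours; LangChain's real `RecursiveCharacterTextSplitter` additionally tries to split on separators such as paragraph breaks):

```python
def split_text(text: str, chunk_size: int, chunk_overlap: int) -> list[str]:
    """Split text into chunks of at most chunk_size characters,
    where consecutive chunks share chunk_overlap characters.
    A simplified illustration, not LangChain's actual algorithm."""
    step = chunk_size - chunk_overlap  # how far the window advances each time
    chunks = []
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break  # the final chunk reached the end of the text
    return chunks

# 25,000 characters with a 10,000-char window and 1,000-char overlap
docs = split_text("a" * 25000, chunk_size=10000, chunk_overlap=1000)
print(f"# of documents = {len(docs)}")  # → # of documents = 3
```

Each chunk starts 9,000 characters after the previous one, so the last 1,000 characters of one chunk repeat at the start of the next; the overlap preserves context across chunk boundaries for downstream retrieval.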
Yoha G. · Reviewed about 1 year ago
Carlos G. · Reviewed about 1 year ago
Malvika V. · Reviewed about 1 year ago
Sameer D. · Reviewed about 1 year ago
vaibhav s. · Reviewed about 1 year ago
Saish B. · Reviewed about 1 year ago
Jacek S. · Reviewed about 1 year ago
We do not guarantee that published reviews come from consumers who purchased or used the products. Reviews are not verified by Google.