GitbookLoader#
- class langchain_community.document_loaders.gitbook.GitbookLoader(
- web_page: str,
- load_all_paths: bool = False,
- base_url: str | None = None,
- content_selector: str = 'main',
- continue_on_failure: bool = False,
- show_progress: bool = True,
- *,
- sitemap_url: str | None = None,
- )
Load GitBook data.
Loads either a single page or all (relative) paths found in the navbar.
Initialize with web page and whether to load all paths.
- Parameters:
web_page (str) – The web page to load, or the starting point from which relative paths are discovered.
load_all_paths (bool) – If set to True, all relative paths in the navbar are loaded instead of only web_page.
base_url (str | None) – If load_all_paths is True, the relative paths are appended to this base URL. Defaults to web_page.
content_selector (str) – The CSS selector for the content to load. Defaults to 'main'.
continue_on_failure (bool) – Whether to continue loading the sitemap if an error occurs loading a URL, emitting a warning instead of raising an exception. Setting this to True makes the loader more robust, but may also result in missing data. Defaults to False.
show_progress (bool) – Whether to show a progress bar while loading. Defaults to True.
sitemap_url (str | None) – Custom sitemap URL to use when load_all_paths is True. Defaults to '{base_url}/sitemap.xml'.
Attributes
web_path
Methods
__init__(web_page[, load_all_paths, ...]): Initialize with web page and whether to load all paths.
alazy_load(): Async lazy load text from the url(s) in web_path.
aload(): Load text from the urls in web_path async into Documents.
ascrape_all(urls[, parser]): Async fetch all urls, then return soups for all results.
fetch_all(urls): Fetch all urls concurrently with rate limiting.
lazy_load(): Fetch text from one single GitBook page.
load(): Load data into Document objects.
load_and_split([text_splitter]): Load Documents and split into chunks.
scrape([parser]): Scrape data from webpage and return it in BeautifulSoup format.
scrape_all(urls[, parser]): Fetch all urls, then return soups for all results.
- __init__(
- web_page: str,
- load_all_paths: bool = False,
- base_url: str | None = None,
- content_selector: str = 'main',
- continue_on_failure: bool = False,
- show_progress: bool = True,
- *,
- sitemap_url: str | None = None,
- )
Initialize with web page and whether to load all paths.
- Parameters:
web_page (str) – The web page to load, or the starting point from which relative paths are discovered.
load_all_paths (bool) – If set to True, all relative paths in the navbar are loaded instead of only web_page.
base_url (str | None) – If load_all_paths is True, the relative paths are appended to this base URL. Defaults to web_page.
content_selector (str) – The CSS selector for the content to load. Defaults to 'main'.
continue_on_failure (bool) – Whether to continue loading the sitemap if an error occurs loading a URL, emitting a warning instead of raising an exception. Setting this to True makes the loader more robust, but may also result in missing data. Defaults to False.
show_progress (bool) – Whether to show a progress bar while loading. Defaults to True.
sitemap_url (str | None) – Custom sitemap URL to use when load_all_paths is True. Defaults to '{base_url}/sitemap.xml'.
- async alazy_load() → AsyncIterator[Document]#
Async lazy load text from the url(s) in web_path.
- Return type:
AsyncIterator[Document]
- aload() → List[Document]#
Deprecated since version 0.3.14: See API reference for updated usage: https://python.langchain.com/api_reference/community/document_loaders/langchain_community.document_loaders.web_base.WebBaseLoader.html It will not be removed until langchain-community==1.0.
Asynchronously load text from the urls in web_path into Documents.
- Return type:
List[Document]
- async ascrape_all(
- urls: List[str],
- parser: str | None = None,
- )
Async fetch all urls, then return soups for all results.
- Parameters:
urls (List[str])
parser (str | None)
- Return type:
List[Any]
- async fetch_all(
- urls: List[str],
- )
Fetch all urls concurrently with rate limiting.
- Parameters:
urls (List[str])
- Return type:
Any
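The concurrency pattern behind fetch_all can be illustrated with a generic stdlib sketch. This shows the usual asyncio idiom for fetching many URLs concurrently while capping in-flight requests with a semaphore; it is an illustration of the technique, not the library's internal implementation, and the fetcher below is a stand-in rather than real HTTP:

```python
import asyncio
from typing import Awaitable, Callable, List

async def fetch_all_limited(
    urls: List[str],
    fetch: Callable[[str], Awaitable[str]],
    max_concurrency: int = 2,
) -> List[str]:
    """Fetch all urls concurrently, at most max_concurrency at a time."""
    sem = asyncio.Semaphore(max_concurrency)

    async def bounded(url: str) -> str:
        # Only max_concurrency coroutines may pass this point at once.
        async with sem:
            return await fetch(url)

    # gather preserves input order regardless of completion order.
    return await asyncio.gather(*(bounded(u) for u in urls))

async def fake_fetch(url: str) -> str:
    # Stand-in for an HTTP GET.
    await asyncio.sleep(0.01)
    return f"<html>{url}</html>"

results = asyncio.run(fetch_all_limited(["a", "b", "c"], fake_fetch))
print(results)  # ['<html>a</html>', '<html>b</html>', '<html>c</html>']
```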
- lazy_load() → Iterator[Document] [source]#
Fetch text from one single GitBook page.
- Return type:
Iterator[Document]
- load_and_split(
- text_splitter: TextSplitter | None = None,
- )
Load Documents and split into chunks. Chunks are returned as Documents.
Do not override this method. It should be considered deprecated!
- Parameters:
text_splitter (Optional[TextSplitter]) – TextSplitter instance to use for splitting documents. Defaults to RecursiveCharacterTextSplitter.
- Returns:
List of Documents.
- Return type:
list[Document]
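The splitting step can be pictured with a simplified character splitter. This is a toy stand-in to show what chunk size and overlap mean, not the actual RecursiveCharacterTextSplitter algorithm (which splits on separators recursively):

```python
def split_text(text: str, chunk_size: int = 20, chunk_overlap: int = 5) -> list[str]:
    """Naive fixed-window splitter: each chunk starts
    chunk_size - chunk_overlap characters after the previous one,
    so consecutive chunks share chunk_overlap characters."""
    step = chunk_size - chunk_overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

chunks = split_text("a" * 50, chunk_size=20, chunk_overlap=5)
print([len(c) for c in chunks])  # [20, 20, 20, 5]
```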
- scrape(
- parser: str | None = None,
- )
Scrape data from webpage and return it in BeautifulSoup format.
- Parameters:
parser (str | None)
- Return type:
Any
- scrape_all(
- urls: List[str],
- parser: str | None = None,
- )
Fetch all urls, then return soups for all results.
- Parameters:
urls (List[str])
parser (str | None)
- Return type:
List[Any]
Examples using GitbookLoader