<p align="center"> <img src="https://raw.githubusercontent.com/omkarcloud/botasaurus/master/images/mascot.png" alt="botasaurus" /> </p> <div align="center" style="margin-top: 0;"> <h1>🤖 Botasaurus 🤖</h1> </div> <h3 align="center"> The All in One Framework to build Awesome Scrapers. </h3> <p align="center"> <b>The web has evolved. Finally, web scraping has too.</b> </p> <p align="center"> <img src="https://views.whatilearened.today/views/github/omkarcloud/botasaurus.svg" width="80px" height="28px" alt="View" /> </p> <p align="center"> <a href="https://gitpod.io/#https://github.com/omkarcloud/botasaurus-starter"> <img alt="Run in Gitpod" src="https://gitpod.io/button/open-in-gitpod.svg" /> </a> </p>

A new version has been released, with a performance boost. To update, please run:

    python -m pip install bota botasaurus botasaurus-api botasaurus-requests botasaurus-driver botasaurus-proxy-authentication botasaurus-server --upgrade

🐿️ Botasaurus In a Nutshell

How wonderful that of all the web scraping tools out there, you chose to learn about Botasaurus. Congratulations!

And now that you are here, you are in for an exciting, unusual and rewarding journey that will make your web scraping life a lot, lot easier.

Now, let me tell you about Botasaurus in bullet points. (Because, as per the marketing gurus, YOU, as a member of the Developer Tribe, have a VERY short attention span.)

So, what is Botasaurus?

Botasaurus is an all-in-one web scraping framework that enables you to build awesome scrapers in less time, less code, and with more fun.

A Web Scraping Magician has put all his web scraping experience and best practices into Botasaurus to save you hundreds of hours of Development Time!

Now, for the magical powers awaiting you after learning Botasaurus:

  • Convert any Web Scraper to a UI-based Scraper in minutes, which will make your Customer sing your praises.

pro-gmaps-demo

  • In terms of humaneness, what Superman is to Man, Botasaurus is to Selenium and Playwright. Easily pass every (Yes, E-V-E-R-Y) bot test; no need to spend time finding ways to access a website.

solve-bot-detection

  • Save up to 97%, yes 97%, on browser proxy costs by using browser-based fetch requests.

  • Easily save hours of Development Time with easy parallelization, profiles, extensions, and proxy configuration. Botasaurus makes asynchronous, parallel scraping child's play (see the sketch after this list).

  • Use Caching, Sitemap, Data cleaning, and other utilities to save hours otherwise spent writing and debugging code.

  • Easily scale your scraper to multiple machines with Kubernetes, and get your data faster than ever.
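
To make the parallelization and caching bullets concrete, here is a minimal sketch. It assumes the @request decorator accepts the parallel and cache keyword arguments and that passing a list to the decorated task runs it once per item; the links and argument values are illustrative.

    # A minimal sketch, assuming @request accepts parallel= and cache=
    # and that calling the task with a list runs it once per item.
    from botasaurus.request import request, Request

    @request(parallel=5, cache=True)  # 5 concurrent workers, results cached
    def scrape_status(request: Request, link):
        # Each parallel invocation receives one item from the input list.
        response = request.get(link)
        return {"link": link, "status": response.status_code}

    # Scrapes both links concurrently; a second run is served from cache.
    scrape_status(["https://www.omkar.cloud/", "https://www.omkar.cloud/blog/"])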

And those are just the highlights. I Mean!

There is so much more to Botasaurus that you will be amazed at how much time you will save with it.

🚀 Getting Started with Botasaurus

Let's dive right in with a straightforward example to understand Botasaurus.

In this example, we will go through the steps to scrape the heading text from https://www.omkar.cloud/.

Botasaurus in action

Step 1: Install Botasaurus

First things first, you need to install Botasaurus. Run the following command in your terminal:

python -m pip install botasaurus

Step 2: Set Up Your Botasaurus Project

Next, let's set up the project:

  1. Create a directory for your Botasaurus project and navigate into it:
    mkdir my-botasaurus-project
    cd my-botasaurus-project
    code .  # This will open the project in VSCode if you have it installed

Step 3: Write the Scraping Code

Now, create a Python script named main.py in your project directory and paste the following code:

    from botasaurus.browser import browser, Driver

    @browser
    def scrape_heading_task(driver: Driver, data):
        # Visit the Omkar Cloud website
        driver.get("https://www.omkar.cloud/")

        # Retrieve the heading element's text
        heading = driver.get_text("h1")

        # Save the data as a JSON file in output/scrape_heading_task.json
        return {
            "heading": heading
        }

    # Initiate the web scraping task
    scrape_heading_task()

Let's understand this code:

  • We define a custom scraping task, scrape_heading_task, decorated with @browser:

        @browser
        def scrape_heading_task(driver: Driver, data):

  • Botasaurus automatically provides a Humane Driver to our function:

        def scrape_heading_task(driver: Driver, data):

  • Inside the function, we:
    • Visit Omkar Cloud
    • Extract the heading text
    • Return the data to be automatically saved as scrape_heading_task.json by Botasaurus:

        driver.get("https://www.omkar.cloud/")
        heading = driver.get_text("h1")
        return {"heading": heading}

  • Finally, we initiate the scraping task:

        # Initiate the web scraping task
        scrape_heading_task()

Step 4: Run the Scraping Task

Time to run it:

python main.py

After executing the script, it will:

  • Launch Google Chrome
  • Visit omkar.cloud
  • Extract the heading text
  • Save it automatically as output/scrape_heading_task.json.
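
The saved file holds the dictionary the function returned, so it will look roughly like this, with "..." standing in for whatever the site's h1 currently says:

    {
        "heading": "..."
    }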

Botasaurus in action

Now, let's explore another way to scrape the heading using the request module. Replace the previous code in main.py with the following:

    from botasaurus.request import request, Request
    from botasaurus.soupify import soupify

    @request
    def scrape_heading_task(request: Request, data):
        # Visit the Omkar Cloud website
        response = request.get("https://www.omkar.cloud/")

        # Create a BeautifulSoup object
        soup = soupify(response)

        # Retrieve the heading element's text
        heading = soup.find('h1').get_text()

        # Save the data as a JSON file in output/scrape_heading_task.json
        return {
            "heading": heading
        }

    # Initiate the web scraping task
    scrape_heading_task()

In this code:

  • We scrape the HTML using request, which is specifically designed for making browser-like humane requests.
  • Next, we parse the HTML into a BeautifulSoup object using soupify() and extract the heading.

Step 5: Run the Scraping Task (which makes Humane HTTP Requests)

Finally, run it again:

python main.py

This time, you will observe the exact same result as before, but instead of opening a whole Browser, we are making browser-like humane HTTP requests.

💡 Understanding Botasaurus

What is Botasaurus Driver, And Why should I use it over Selenium and Playwright?

Botasaurus Driver is a web automation driver like Selenium, and the single most important reason to use it is that it is truly humane. You will not, and I repeat NOT, have any issues with accessing any website.

Plus, it is super fast to launch and use; the API is designed by and for web scrapers, and you will love it.

How do I access Cloudflare-protected pages using Botasaurus?

Cloudflare is the most popular protection system on the web. So, let's see how Botasaurus can help you solve various Cloudflare challenges.

Connection Challenge

This is the single most popular challenge and requires making a browser-like connection with appropriate headers. It's commonly used for:

  • Product Pages
  • Blog Pages
  • Search Result Pages

Example Page: https://www.g2.com/products/github/reviews

What Works?

  • Visiting the website via Google Referrer (which makes it seem as if the user has arrived from a Google search).

        from botasaurus.browser import browser, Driver

        @browser
        def scrape_heading_task(driver: Driver, data):
            # Visit the website via Google Referrer
            driver.google_get("https://www.g2.com/products/github/reviews")
            driver.prompt()
            heading = driver.get_text('.product-head__title [itemprop="name"]')
            return heading

        scrape_heading_task()

  • Using the request module. The Request object is smart and, by default, visits any link with a Google Referrer. Although this works, you will need to use retries.

        from botasaurus.request import request, Request

        @request(max_retry=10)
        def scrape_heading_task(request: Request, data):
            response = request.get('https://www.g2.com/products/github/reviews')
            print(response.status_code)
            response.raise_for_status()
            return response.text

        scrape_heading_task()

JS with Captcha Challenge

This challenge requires performing JS computations that differentiate a Chrome controlled by Selenium/Puppeteer/Playwright from a real Chrome. It also involves solving a Captcha. It's used for pages that are rarely, but sometimes, visited by people, such as:

  • 5th Review page
  • Auth pages

Example Page: https://www.g2.com/products/github/reviews.html?page=5&product_id=github

What Does Not Work?

Using @request does not work because although it can make browser-like HTTP requests, it cannot run JavaScript to solve the challenge.

What Works?

Pass the bypass_cloudflare=True argument to the google_get method.

    from botasaurus.browser import browser, Driver

    @browser
    def scrape_heading_task(driver: Driver, data):
        driver.google_get("https://www.g2.com/products/github/reviews.html?page=5&product_id=github", bypass_cloudflare=True)
        driver.prompt()
        heading = driver.get_text('.product-head__title [itemprop="name"]')
        return heading

    scrape_heading_task()

What are the benefits of a UI Scraper?

Here are some benefits of creating a scraper with a user interface:

  • Simplify your scraper usage for customers, eliminating the need to teach them how to modify and run your code.
  • Protect your code by hosting the scraper on the web and offering a monthly subscription, rather than providing full access to your code. This approach:
    • Safeguards your Python code from being copied and reused, increasing your customers' lifetime value.
    • Generates monthly recurring revenue via subscriptions from your customers, surpassing a one-time payment.
  • Enable sorting, filtering, and downloading of data in various formats (JSON, Excel, CSV, etc.).
  • Provide access via a REST API for seamless integration.
  • Create a polished frontend, backend, and API integration with minimal code.

How to run a UI-based scraper?

Let's run the Botasaurus Starter Template (the recommended template for greenfield Botasaurus projects), which scrapes the heading of a provided link. Follow these steps:

  1. Clone the Starter Template:

    git clone https://github.com/omkarcloud/botasaurus-starter my-botasaurus-project
    cd my-botasaurus-project
    
  2. Install dependencies (will take a few minutes):

    python -m pip install -r requirements.txt
    python run.py install
    
  3. Run the scraper:

    python run.py
    

Your browser will automatically open up at http://localhost:3000/. Then, enter the link you want to scrape (e.g., https://www.omkar.cloud/) and click on the Run Button.

starter-scraper-demo

After a few seconds, the data will be scraped.

starter-scraper-demo-result

Visit http://localhost:3000/output to see all the tasks you have started.

starter-scraper-demo-tasks

Go to http://localhost:3000/about to see the rendered README.md file of the project.

starter-scraper-demo-readme

Finally, visit http://localhost:3000/api-integration to see how to access the Scraper via API.

starter-scraper-demo-api

The API Documentation is generated dynamically based on your Scraper's Inputs, Sorts, Filters, etc., and is unique to your Scraper.

So, whenever you need to run the Scraper via API, visit this tab and copy the code specific to your Scraper.
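
For a rough idea of what that copied code looks like, here is a hypothetical sketch; treat the botasaurus_api client and its create_sync_task method below as assumptions, and always use the exact snippet generated in the api-integration tab.

    # A hypothetical sketch -- copy the authoritative code from the
    # api-integration tab, as it is generated for your specific Scraper.
    from botasaurus_api import Api

    # Assumes the server started by `python run.py` is running locally.
    api = Api()

    # Create a task with the same input the UI form collects,
    # and wait for its results.
    results = api.create_sync_task(data={"link": "https://www.omkar.cloud/"})
    print(results)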

How to create a UI Scraper using Botasaurus?

Creating a UI Scraper with Botasaurus is a simple 3-step process:

  1. Create your Scraper function
  2. Add the Scraper to the Server using 1 line of code
  3. Define the input controls for the Scraper

To understand these steps, let's go through the code of the Botasaurus Starter Template that you just ran.

Step 1: Create the Scraper Function

In src/scrape_heading_task.py, we define a scraping function which basically does the following:

  1. Receives a data object and extracts the "link".
  2. Retrieves the HTML content of the webpage using the "link".
  3. Converts the HTML into a BeautifulSoup object.
  4. Locates the heading element, extracts its text content and returns it.
    from botasaurus.request import request, Request
    from botasaurus.soupify import soupify

    @request
    def scrape_heading_task(request: Request, data):
        # Visit the Link
        response = request.get(data["link"])

        # Create a BeautifulSoup object
        soup = soupify(response)

        # Retrieve the heading element's text
        heading = soup.find('h1').get_text()

        # Save the data as a JSON file in output/scrape_heading_task.json
        return {
            "heading": heading
        }

Step 2: Add the Scraper to the Server

In backend/scrapers.py, we:

  • Import our scraping function
  • Use Server.add_scraper() to register the scraper
    from botasaurus_server.server import Server
    from src.scrape_heading_task import scrape_heading_task

    # Add the scraper to the server
    Server.add_scraper(scrape_heading_task)

Step 3: Define the Input Controls

In backend/inputs/scrape_heading_task.js we:

  • Define a getInput function that takes the controls parameter
  • Add a link input control to it
  • Use comments to enable intellisense in VSCode (Very Very Important)
    /**
     * @typedef {import('../../frontend/node_modules/botasaurus-controls/dist/index').Controls} Controls
     */

    /**
     * @param {Controls} controls
     */
    function getInput(controls) {
        controls
            // Render a Link Input, which is required, defaults to "https://www.omkar.cloud/".
            .link('link', {
                isRequired: true,
                defaultValue: "https://www.omkar.cloud/"
            })
    }

Above was a simple example; below is a real-world example with multi-text, number, switch, select, section, and other controls.

    /**
     * @typedef {import('../../frontend/node_modules/botasaurus-controls/dist/index').Controls} Controls
     */

    /**
     * @param {Controls} controls
     */
    function getInput(controls) {
        controls
            .listOfTexts('queries', {
                defaultValue: ["Web Developers in Bangalore"],
                placeholder: "Web Developers in Bangalore",
                label: 'Search Queries',
                isRequired: true
            })
            .section("Email and Social Links Extraction", (section) => {
                section.text('api_key', {
                    placeholder: "2e5d346ap4db8mce4fj7fc112s9h26s61e1192b6a526af51n9",
                    label: 'Email and Social Links Extraction API Key',
                    helpText: 'Enter your API key to extract email addresses and social media links.',
                })
            })
            .section("Reviews Extraction", (section) => {
                section
                    .switch('enable_reviews_extraction', {
                        label: "Enable Reviews Extraction"
                    })
                    .numberGreaterThanOrEqualToZero('max_reviews', {
                        label: 'Max Reviews per Place (Leave empty to extract all reviews)',
                        placeholder: 20,
                        isShown: (data) => data['enable_reviews_extraction'],
                        defaultValue: 20,
                    })
                    .choose('reviews_sort', {
                        label: "Sort Reviews By",
                        isRequired: true,
                        isShown: (data) => data['enable_reviews_extraction'],
                        defaultValue: 'newest',
                        options: [
                            { value: 'newest', label: 'Newest' },
                            { value: 'most_relevant', label: 'Most Relevant' },
                            { value: 'highest_rating', label: 'Highest Rating' },
                            { value: 'lowest_rating', label: 'Lowest Rating' }
                        ]
                    })
            })
            .section("Language and Max Results", (section) => {
                section
                    .addLangSelect()
                    .numberGreaterThanOrEqualToOne('max_results', {
                        placeholder: 100,
                        label: 'Max Results per Search Query (Leave empty to extract all places)'
                    })
            })
            .section("Geo Location", (section) => {
                section
                    .text('coordinates', { placeholder:
