
Web Scrapers for Windows

  • 1
    A toolkit for crawling information from web pages by combining different kinds of "actions". Actions are simple operations such as navigating to a specified URL or extracting text from the HTML. A graphical user interface is also available.
  • 2
    Gerapy

    Distributed Crawler Management Framework Based on Scrapy

    Distributed crawler management framework based on Scrapy, Scrapyd, Scrapyd-Client, Scrapyd-API, Django, and Vue.js. Anyone who has written crawlers in Python has probably used Scrapy, a very powerful crawler framework with high crawling efficiency and good scalability; it is practically a required tool for developing crawlers in Python. You can run Scrapy crawlers on your own machine, but when the crawl is very large that is no longer practical. A better approach is to deploy the Scrapy project to a remote server for execution, and this is where Scrapyd comes in: install Scrapyd on the remote server, start its service, and you can deploy your Scrapy projects to the remote host. Scrapyd also provides a variety of API operations that give you full control over running the Scrapy project.
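
    As an illustration of the Scrapyd API mentioned above, a minimal sketch that schedules and monitors a crawl over HTTP (it assumes a Scrapyd service on localhost:6800 and a deployed project named "myproject" with a spider named "myspider"):

        import requests

        SCRAPYD = "http://localhost:6800"  # assumed Scrapyd host and port

        # Schedule a crawl of an already-deployed Scrapy project.
        resp = requests.post(f"{SCRAPYD}/schedule.json",
                             data={"project": "myproject", "spider": "myspider"})
        job_id = resp.json()["jobid"]

        # Poll job state (pending/running/finished) for the project.
        jobs = requests.get(f"{SCRAPYD}/listjobs.json",
                            params={"project": "myproject"}).json()
        print(job_id, jobs["running"], jobs["finished"])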
  • 3
    GitGet

    Ever wanted to download only a part of a Git repository?

    Ever wanted to download only a part of a Git repository? Just paste the URL of the repo you want to download, then sit back and enjoy. This simple Java application uses web scraping to download only the files you need, helping you save your precious bandwidth and space.
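
    GitGet itself is Java; as a hypothetical sketch of the same idea in Python, one directory of a GitHub repository can be fetched through GitHub's REST contents API instead of cloning the whole repo (the owner, repo, and path below are placeholders):

        import os
        import requests

        def download_dir(owner, repo, path, dest="."):
            """Recursively download one directory of a GitHub repository
            via the REST contents API, skipping the rest of the repo."""
            url = f"https://api.github.com/repos/{owner}/{repo}/contents/{path}"
            for item in requests.get(url, timeout=30).json():
                target = os.path.join(dest, item["name"])
                if item["type"] == "file":
                    with open(target, "wb") as f:
                        f.write(requests.get(item["download_url"], timeout=30).content)
                elif item["type"] == "dir":
                    os.makedirs(target, exist_ok=True)
                    download_dir(owner, repo, item["path"], target)

        # e.g. download_dir("octocat", "Hello-World", "some/subdir")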
  • 4
    Goutte

    Goutte, a simple PHP Web Scraper

    Goutte is a screen scraping and web crawling library for PHP. Goutte provides a nice API to crawl websites and extract data from the HTML/XML responses. Goutte depends on PHP 7.1+. Add fabpot/goutte as a require dependency in your composer.json file. Create a Goutte Client instance (which extends Symfony\Component\BrowserKit\HttpBrowser) and make requests with the request() method. The method returns a Crawler object (Symfony\Component\DomCrawler\Crawler). To use your own HTTP settings, you may create and pass an HttpClient instance to Goutte, for example to add a 60-second request timeout. Read the documentation of the BrowserKit, DomCrawler, and HttpClient Symfony Components for more information about what you can do with Goutte. Goutte is a thin wrapper around the following Symfony Components: BrowserKit, CssSelector, DomCrawler, and HttpClient.
  • 5
    Grab Framework Project

    Web Scraping Framework

    Grab is a Python framework for building web scrapers. With Grab you can build web scrapers of various complexity, from simple 5-line scripts to complex asynchronous website crawlers processing millions of web pages. Grab provides an API for performing network requests and for handling the received content, e.g. interacting with the DOM tree of the HTML document. The single request/response API lets you build a network request, perform it, and work with the received content; it is built on top of the urllib3 and lxml libraries. The Spider API, for building asynchronous web crawlers, has you write classes that define handlers for each type of network request; each handler can spawn new network requests, and requests are processed concurrently with a pool of asynchronous sockets. In short, Grab provides an interface called Spider to develop multithreaded website scrapers.
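
    A minimal sketch of the request/response API described above (the URL is a placeholder):

        from grab import Grab

        g = Grab()
        g.go("https://example.com")            # build and perform the request
        # Query the parsed DOM of the response with an XPath selector.
        print(g.doc.select("//title").text())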
  • 6
    HtmlClient provides an SGML/HTML/XHTML parser and connection client, making web spidering as easy for developers as actually surfing the web with a premade browser. Based on Apache's HttpClient.
  • 7
    J-Obey is a Java library/package that gives people writing their own crawlers a stable robots.txt parser. If you are writing a web crawler of some sort, you can use J-Obey to take out the hassle of writing a robots.txt parser/interpreter.
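
    J-Obey is a Java library, but the idea is language-neutral; for illustration, a sketch using Python's standard-library robots.txt parser, which a crawler consults before fetching a page (the site is a placeholder):

        from urllib.robotparser import RobotFileParser

        rp = RobotFileParser()
        rp.set_url("https://example.com/robots.txt")
        rp.read()  # fetch and parse the robots.txt file

        # Ask whether a given user agent may fetch a URL before crawling it.
        print(rp.can_fetch("MyCrawler", "https://example.com/private/page.html"))
        print(rp.crawl_delay("MyCrawler"))  # None if no Crawl-delay directive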
  • 8
    A Java implementation of a flexible and extensible web spider engine. Optional modules allow functionality to be added (searching for dead links, testing the performance and scalability of a site, creating a sitemap, etc.).
  • 9

    Jamilsoft Blade Email Extractor

    Powerful and easy-to-use email extraction software

    Jamilsoft Blade is powerful and easy-to-use email extraction software that can help you extract email addresses from a variety of sources, including websites, documents, and social media. With Jamilsoft Blade, you can quickly and easily find the email addresses you need, even if they are hidden or obscured.
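
    The entry does not describe Jamilsoft Blade's internals; as a hypothetical sketch of the core idea, extracting addresses from a fetched page with a regular expression (the pattern is deliberately simple, and real extractors also handle obfuscated forms):

        import re
        import requests

        EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

        html = requests.get("https://example.com/contact", timeout=30).text
        # Deduplicate and sort whatever addresses appear in the raw HTML.
        for address in sorted(set(EMAIL_RE.findall(html))):
            print(address)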
  • 10
    A web spider written in Java that can be used both as a stand-alone application and as the core of other applications that exploit its functionality.
  • 11
    a minimal Java web crawler
  • 12
    Letterboxd Recommendations

    Scraping publicly-accessible Letterboxd data for movie recommendations

    Scraping publicly-accessible Letterboxd data and creating a movie recommendation model with it that can generate recommendations when provided with a Letterboxd username. A user's "star" ratings are scraped from their Letterboxd profile and assigned numerical ratings from 1 to 10 (accounting for half stars). Their ratings are then combined with a sample of ratings from the top 4000 most active users on the site to create a collaborative filtering recommender model using singular value decomposition (SVD). All movies in the full dataset that the user has not rated are run through the model for predicted scores and the items with the top predicted scores are returned. Due to constraints in time and computing power, the maximum sample size that a user is allowed to select is 500,000 samples, though there are over five million ratings in the full dataset from the top 4000 Letterboxd users alone.
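
    A hypothetical sketch of the SVD-based collaborative filtering step, using the scikit-surprise library (the entry does not name an implementation, and the ratings below are placeholders):

        import pandas as pd
        from surprise import SVD, Dataset, Reader

        # Placeholder scraped ratings: one row per (user, film, rating),
        # with half-star ratings already mapped onto a 1-10 scale.
        ratings = pd.DataFrame({
            "user":   ["alice", "alice", "bob"],
            "film":   ["parasite", "alien", "parasite"],
            "rating": [10, 7, 9],
        })

        reader = Reader(rating_scale=(1, 10))
        data = Dataset.load_from_df(ratings[["user", "film", "rating"]], reader)

        model = SVD()                          # matrix factorization via SVD
        model.fit(data.build_full_trainset())

        # Predict a score for a film this user has not rated yet.
        print(model.predict("bob", "alien").est)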
  • 13
    A web spider, based on the availability of URL APIs in most web-based databases, mapping web pages to two-dimensional FreeMind mind maps. Mapp.it runs locally like a web application and uses a small-footprint CherryPy web server.
  • 14
    MechanicalSoup

    A Python library for automating interaction with websites

    A Python library for automating interaction with websites. MechanicalSoup automatically stores and sends cookies, follows redirects, and can follow links and submit forms. It doesn't do JavaScript. MechanicalSoup was created by M Hickford, who was a fond user of the Mechanize library. Unfortunately, Mechanize was incompatible with Python 3 until 2019 and its development stalled for several years. MechanicalSoup provides a similar API, built on the Python giants Requests (for HTTP sessions) and BeautifulSoup (for document navigation). Since 2017 it has been actively maintained by a small team.
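
    A minimal sketch of the cookie handling and form submission described above (the URL and form field names are placeholders):

        import mechanicalsoup

        browser = mechanicalsoup.StatefulBrowser()  # stores cookies, follows redirects
        browser.open("https://example.com/login")

        # Fill in and submit the first <form> on the page.
        browser.select_form("form")
        browser["username"] = "alice"
        browser["password"] = "secret"
        browser.submit_selected()

        # browser.page is the BeautifulSoup document of the current page.
        print(browser.page.title.text)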
  • 15
    Methanol is a scriptable multi-purpose web crawling system with an extensible configuration system and speed-optimized architectural design. Methabot is the web crawler of Methanol.
  • 16
    MusicalGalaxy

    Shows the complex connection between musicians and their pupils

    A galaxy of musicians, connected from pupil to teacher in a complex family tree and displayed in an elegant, dynamic, interactive way. A demonstration can be found here: https://v.redd.it/wc5dhs12m7a51/DASH_720?source=fallback The size and colour of the stars depend on the number of connections the musician made over their lifetime. Those who taught few reside around the edges, whilst the greatest teachers cluster around the centre. Everything is connected. The brightest star at the moment is Nadia Boulanger, who single-handedly changed the face of modern music and whose students included the likes of Philip Glass, Daniel Barenboim, and Aaron Copland. Credit to OrigamiDrag0n, 2020. All code is under the MIT license. UPDATE: clicking the bubbles now opens the webpage of the chosen composer. Error messages thrown by this update are currently being investigated.
  • 17
    NightCrawler is a multithreaded web spider which uses MIME types to download files.
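
    As a hypothetical illustration of MIME-type-driven downloading of the kind NightCrawler describes (not its actual code; the URL and wanted types are placeholders):

        import requests

        WANTED = {"application/pdf", "image/png"}   # MIME types worth saving

        url = "https://example.com/file"
        head = requests.head(url, allow_redirects=True, timeout=30)
        mime = head.headers.get("Content-Type", "").split(";")[0].strip()

        # Download the body only when the server-reported MIME type matches.
        if mime in WANTED:
            with open("download.bin", "wb") as f:
                f.write(requests.get(url, timeout=30).content)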
  • 18
    Nomad is a tiny but efficient search engine and web crawler. It works very well for searching within a set of corporate websites on the internet and/or an intranet's HTML documents or knowledge repositories.
  • 19

    PGBuild

    Compile your mobile web pages into mobile apps via build.phonegap.com

    PGBuild is a PhoneGap development system that automates the development process by connecting your CMS/web server with the online service [Phonegap Build](http://build.phonegap.com). PGBuild is essentially a web spider that makes off-line versions of web pages. The off-line version is zipped and sent to the Phonegap Build service. The spider is controlled by a project file that sets the rules for the spider and the options for the Phonegap Build service. You may create and manage your PhoneGap project source files manually on your web server, or use PGBuild to connect to a CMS system to extract content. PGBuild is managed from a small widget that you may use yourself or integrate into a CMS system.
  • 20
    Perl Web Scraping Project

    Web scraping (web harvesting or web data extraction) is data scraping used for extracting data from websites.[1] Web scraping software may access the World Wide Web directly using the Hypertext Transfer Protocol, or through a web browser. While web scraping can be done manually by a software user, the term typically refers to automated processes implemented using a bot or web crawler. It is a form of copying, in which specific data is gathered and copied from the web, typically into a central local database or spreadsheet, for later retrieval or analysis. Web scraping a web page involves fetching it and extracting from it.[1][2] Fetching is the downloading of a page (which a browser does when you view the page). Therefore, web crawling is a main component of web scraping, to fetch pages for later processing. Once fetched, then extraction can take place. The content of a page may be parsed, searched, reformatted, its data copied into a spreadsheet, and so on.
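
    The project itself is Perl; as a language-neutral sketch of the fetch-then-extract split described above, in Python (the URL is a placeholder):

        import requests
        from bs4 import BeautifulSoup

        # Fetching: download the page, as a browser does when you view it.
        html = requests.get("https://example.com", timeout=30).text

        # Extraction: parse the fetched content and copy out specific data.
        soup = BeautifulSoup(html, "html.parser")
        for link in soup.find_all("a", href=True):
            print(link.text.strip(), link["href"])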
  • 21
    Provas aplicadas - Concursos

    Download exams given by examining boards

    A desktop application for downloading exams given in public service examinations (concursos públicos) in Brazil. The application offers customizable filters for searching exams and answer keys by examining board, year, and education level.
  • 22

    Python Crawler Library

    Python Web Crawler Library

    A simple library for crawling the web. This library gives you the ability to create macros for crawling web sites and performing simple actions, such as logging in, on those sites.
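
    The library's own macro API is not documented here; as a hypothetical illustration of the "log in, then crawl" kind of macro it describes, using the requests library (the URL and form fields are placeholders):

        import requests

        session = requests.Session()   # keeps cookies across the macro's steps

        # Step 1 of the macro: log in.
        session.post("https://example.com/login",
                     data={"username": "alice", "password": "secret"})

        # Step 2: crawl a page that requires the authenticated session.
        page = session.get("https://example.com/members/reports")
        print(page.status_code, len(page.text))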
  • 23

    Rapid Reference

    An extension that allows for hassle-free website citation/referencing.

    Please do not distribute with the goal of selling my program.

    How to attach to your Chrome/Edge/Brave etc. browser:
    1. Download the extension (rapidreference.zip)
    2. Extract it
    3. Go to chrome://extensions if on Chrome, or navigate to your browser's extension management settings
    4. Enable developer mode (usually top right)
    5. Add unpacked extension
    6. Choose the extracted extension's folder
    7. There you go!

    How to use:
    1. Start a session in the panel of the extension
    2. Do the required research/website navigation
    3. Check the preview of the cited references in the panel
    4. Copy the citation list to the clipboard by simply clicking the "Copy Citation" button
    5. You're done! Thanks for using my software 😊
  • 24
    Yet another web crawler? Yes, but this one uses the full power of regular expressions to accept or reject, examine or ignore, and save or refuse pages. You can also use MIME types to do all this. Powerful and flexible.
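
    A hypothetical illustration of regex-driven accept/reject rules of the kind described (the patterns are made up):

        import re

        ACCEPT = [re.compile(r"^https://example\.com/docs/")]   # crawl these
        REJECT = [re.compile(r"\.(jpg|png|zip)$"),              # skip binaries
                  re.compile(r"/login")]                        # skip auth pages

        def should_crawl(url: str) -> bool:
            """Accept a URL only if it matches an accept rule and no reject rule."""
            if not any(p.search(url) for p in ACCEPT):
                return False
            return not any(p.search(url) for p in REJECT)

        print(should_crawl("https://example.com/docs/intro.html"))  # True
        print(should_crawl("https://example.com/docs/logo.png"))    # False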
  • 25
    Requests-HTML

    Pythonic HTML Parsing for Humans

    This library intends to make parsing HTML (e.g. scraping the web) as simple and intuitive as possible. When using this library you automatically get full JavaScript support (using Chromium, thanks to pyppeteer), CSS selectors (a.k.a. jQuery-style, thanks to PyQuery), XPath selectors for the faint of heart, a mocked user agent (like a real web browser), automatic following of redirects, connection pooling and cookie persistence, and the Requests experience you know and love, with magical parsing abilities and async support. The async code operates the same way as the synchronous version, except that the result is a list containing multiple response objects; the same basic processes can be applied as above to extract the data you want.
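
    A minimal sketch of the API described above (the URL is a placeholder; render() downloads Chromium on first use):

        from requests_html import HTMLSession

        session = HTMLSession()
        r = session.get("https://example.com")

        # CSS selectors (jQuery-style) and XPath both work on r.html.
        print(r.html.find("title", first=True).text)
        print(r.html.xpath("//a/@href"))

        # To execute the page's JavaScript first, uncomment:
        # r.html.render()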