Thursday, 28 September 2017

Web Data Extraction

The Internet as we know it today is a repository of information that can be accessed across geographical boundaries. In just over two decades, the Web has moved from a university curiosity to a fundamental research, marketing and communications vehicle that touches the everyday life of most people all over the world. It is accessed by over 16% of the world's population, spanning more than 233 countries.

As the amount of information on the Web grows, that information becomes ever harder to keep track of and use. Compounding the matter, this information is spread over billions of Web pages, each with its own independent structure and format. So how do you find the information you're looking for in a useful format - and do it quickly and easily without breaking the bank?

Search Isn't Enough

Search engines are a big help, but they can do only part of the work, and they are hard-pressed to keep up with daily changes. For all the power of Google and its kin, all that search engines can do is locate information and point to it. They go only two or three levels deep into a Web site to find information and then return URLs. Search engines cannot retrieve information from the deep web - information that is available only after filling in some sort of registration form and logging in - and store it in a desirable format. To save the information in a desirable format or feed it into a particular application, after using the search engine to locate data you still have to do the following tasks to capture the information you need:

· Scan the content until you find the information.

· Mark the information (usually by highlighting with a mouse).

· Switch to another application (such as a spreadsheet, database or word processor).

· Paste the information into that application.

It's not all copy and paste

Consider the scenario of a company looking to build an email marketing list of over 100,000 names and email addresses from a public group. It will take over 28 man-hours even if the person manages to copy and paste each name and email in 1 second, translating to over $500 in wages alone, not to mention the other costs associated with it. The time involved in copying a record is directly proportional to the number of fields of data that have to be copied and pasted.
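As a quick sanity check on those numbers, here is a back-of-the-envelope calculation in Python. The record count and pace are the same as above, and the hourly wage is an assumed value used purely for illustration:

```python
# Rough cost of manual copy-paste, using the figures from the paragraph above.
# The $18/hour wage is an assumption for illustration only.
records = 100_000
seconds_per_record = 1
hourly_wage = 18

hours = records * seconds_per_record / 3600
cost = hours * hourly_wage
print(f"{hours:.1f} man-hours, roughly ${cost:.0f} in wages")
# -> 27.8 man-hours, roughly $500 in wages
```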

Is there any alternative to copy-paste?

A better solution, especially for companies aiming to exploit a broad swath of data about markets or competitors available on the Internet, lies in the use of custom Web harvesting software and tools.

Web harvesting software automatically extracts information from the Web and picks up where search engines leave off, doing the work the search engine can't. Extraction tools automate the reading, copying and pasting necessary to collect information for further use. The software mimics human interaction with the website and gathers data as if the website were being browsed. Web harvesting software navigates the website to locate, filter and copy the required data at much higher speeds than is humanly possible. Advanced software is even able to browse the website and gather data silently, without leaving footprints of its access.
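To make the idea concrete, here is a minimal sketch of the kind of "locate, filter and copy" loop such tools automate, written in Python with the requests and BeautifulSoup libraries. The URL, CSS selectors and output file are hypothetical placeholders, not a reference to any particular product:

```python
# Minimal illustration of automated extraction: fetch a page, pick out the
# fields of interest and save them to a spreadsheet-friendly CSV file.
import csv
import requests
from bs4 import BeautifulSoup

response = requests.get("https://example.com/directory", timeout=30)
soup = BeautifulSoup(response.text, "html.parser")

rows = []
for card in soup.select(".member-card"):              # hypothetical selector
    name = card.select_one(".name").get_text(strip=True)
    email = card.select_one(".email").get_text(strip=True)
    rows.append({"name": name, "email": email})

with open("contacts.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["name", "email"])
    writer.writeheader()
    writer.writerows(rows)
```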

The next article in this series will give more details about how such software works and uncover some myths about web harvesting.


Article Source: http://EzineArticles.com/expert/Thomas_Tuke/5484

Tuesday, 26 September 2017

Data Collection - Make a Plan

Planning for the data collection activity provides a stable and reliable data collection process in the Measure phase.

A well-planned activity ensures that your efforts and costs will not be in vain. Data collection typically involves three phases: pre-collection, collection and post-collection.

Pre-collection activities: Goal setting and forming operational definitions are some of the pre-collection activities that form the basis for systematic and precise data collection.

1.  Setting goals and objectives: Goal setting and defining objectives is the most important part of the pre-collection phase.

It enables teams to give direction to the data collection effort. The plan includes a description of the Six Sigma project being planned. It lists out the specific data that is required for the further steps in the process.

If there are no specific details about the data needs, the data collection activity will lack a clear scope - and may become irrelevant over a period of time.

The plan must mention the rationale of data being collected as well as the final utilization.

2.  Define operational definitions: The team must clearly define what data has to be collected and how. An operational definition of scope, time interval and the number of observations required is very important.

If it mentions the methodology to be used, it can act as a very important guideline for all data collection team members.

An understanding of all applicable information can help ensure that no misleading data is collected, which might otherwise be loosely interpreted and lead to a disastrous outcome.

3.  Repeatability, stability and accuracy of data: The repeatability of the data being collected is very important.

This means that when the same operator undertakes the same activity on a later date, it should produce the same output. Additionally, the data is reproducible if all operators reach the same outcome.

Measurement systems should be accurate and stable, such that outcomes are the same with similar equipment over a period of time.

The team may carry out testing to ensure that there is no reduction in these factors.

Collection Activity

After planning and defining goals, the actual data collection process starts according to plan. Going by the plan ensures that teams achieve expected results consistently and accurately.

Training can be undertaken so as to ensure that all data collection agents have a common understanding of data being collected. Black Belts or team leaders can look over the process initially to provide any support needed.

For data collection over a longer period, teams need to ensure regular oversight to ensure that no collection activities are overlooked.

Post collection activities

Once collection activities are completed, the accuracy and reliability of the data has to be reviewed.

Source: http://ezinearticles.com/?Data-Collection---Make-a-Plan&id=2792515

Monday, 25 September 2017

How Web Crawling Can Help Venture Capital Firms

Venture capital firms are constantly on the lookout for innovative start-ups to invest in. Whether you provide financial capital to early-stage start-ups in IT, software products, biotechnology or other booming industries, you will need the right information as soon as possible. In general, analysing media data to discover and validate insights is one of the key areas in which analysts work. Hence, constantly monitoring popular media outlets is one of the approaches VCs can take to spot trends. Read on to understand how web crawling can not only speed up this whole process but also improve the workflow and accuracy of insights.

What is web crawling

Web crawling simply refers to the use of automated computer programs to visit websites and extract specific bits of information. This is the same technology used by search engines to find, index and serve search results for user queries. Web crawling, as you'd have guessed, is a technical and niche process. It takes skilled programmers to write programs that can navigate through the web to find and extract the needed data.

There are DIY tools, vertical-specific data providers and DaaS (Data as a Service) solutions that VC firms can deploy for crawling. Although there is the option of setting up an in-house crawling operation, this isn't recommended for venture capital firms. The high technical barrier and complexity of the web crawling process can lead to a loss of focus for the VC firm. DaaS can be the ideal option, as it suits recurring and large-scale requirements that only a hosted solution can serve.

How web crawling can help Venture Capital firms

Crawling start-up and entrepreneurship blogs using a web crawling service can help VC firms obtain the much-needed data that they can use to discover new trends and validate their research. This can complement the existing research process and make it much more efficient.

1. Spot trends

Spotting new trends in the market is extremely important for venture capital firms. This helps identify the niches that have a high probability of bringing in profit. Since investing in companies that have higher chances of succeeding is what venture capital firms do, the ability to spot trends becomes an invaluable tool.

Web crawling can harvest enough data to identify trends in the market. Websites like TechCrunch and VentureBeat are great sources of start-up related news and information. Media sites like these talk about trending topics constantly. To spot trends in the market, you could use a web crawling solution to extract the article title, date and URL for the current time period and run this data through an analytics solution to identify the most used words in the article titles and URLs, as sketched below. Venture capital firms can then use these insights to target newer companies in the trending niches. Technology blogs, forums and communities can be great places to find relevant start-ups.
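As a rough illustration of that last step, the sketch below counts the most frequent words across a set of crawled article titles using Python's collections.Counter. The titles and stopword list are made-up samples; in practice they would come from the crawled data:

```python
# Count the most common words across crawled article titles to surface
# trending topics. Sample titles and stopwords are illustrative only.
from collections import Counter

titles = [
    "AI startup raises Series A",
    "Fintech platform expands to Europe",
    "New AI tool for developers launches",
]
stopwords = {"a", "to", "for", "the", "new"}

words = [
    word
    for title in titles
    for word in title.lower().split()
    if word not in stopwords
]
print(Counter(words).most_common(5))
```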

2. Validate findings

The manual research done by the analysts needs to be validated before the firm can go ahead with further proceedings. Validation can be done by comparing the results of the manual work with the relevant data extracted using web crawling. This not only makes validation much easier but also helps in the weeding-out process, thus reducing the possibility of mistakes. This can be partially automated by using intelligent data processing and visualisation tools on top of the data.

3. Save time

Machines are much faster than humans. Employing web crawling to assist in the research processes in a venture capital firm can save the analysts a lot of time and effort. This time can be further invested in more productive activities like analytics, deep research and evaluation.

Source: https://www.promptcloud.com/blog/web-crawling-for-venture-capital-firms

Saturday, 22 July 2017

Scraping Dynamic Websites: How We Tackle the Problem

Acquiring data from the web for business applications has already gained popularity, as the sheer number of use cases shows. Companies have realized the value addition provided by data and are looking for better and more efficient ways of data extraction. However, web scraping is a niche technical process that takes years to master, given the dynamic nature of the web. Since every website is different and custom coded, it's not possible to write a single program that can handle multiple websites. The web scraping setup must be coded separately for each target site, and this needs a team of skilled programmers.

Web scraping is without doubt a complex trade; however, if the target site in question employs dynamic coding practices, this complexity is further multiplied. Over the years, we have understood the technical nuances of web scraping and perfected our modus operandi to scrape dynamic websites with high accuracy and efficiency. Here are some of the ways we tackle the challenge of scraping dynamic websites.

1. Proxies

Some websites serve different geo-, device-, OS- or browser-specific versions depending on these variables. This can create a great deal of confusion for the crawlers, especially while figuring out how to extract the right version. Handling it needs some manual work in terms of finding the different versions served by the site and configuring proxies to fetch the right version as per the requirement. For geo-specific versions, the crawler is simply deployed on a server from where the required version of the site is accessible.
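As a simplified example of the idea (not our exact setup), a crawler written in Python could route its requests through a proxy in the target region and present a device-specific User-Agent so the right version of the site is served. The proxy address and URL below are placeholders:

```python
# Fetch a page through a proxy and with a mobile User-Agent so that a
# specific geo/device version of the site is returned. Placeholder values.
import requests

proxies = {
    "http": "http://203.0.113.10:8080",    # hypothetical proxy in the target region
    "https": "http://203.0.113.10:8080",
}
headers = {
    "User-Agent": "Mozilla/5.0 (iPhone; CPU iPhone OS 14_0 like Mac OS X)"
}

response = requests.get("https://example.com", proxies=proxies,
                        headers=headers, timeout=30)
print(response.status_code, len(response.text))
```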

2. Browser automation

When it comes to websites that use very complex and dynamic code, it's better to have all the page content rendered by a browser first. Selenium can be used for this kind of browser automation: it is essentially a handy toolkit that can drive a browser from your favorite programming language. Although it's primarily used for testing, it can also be used to scrape dynamic web pages. In this approach, the browser first renders the page, which avoids the problem of reverse engineering JavaScript code to fetch the page content. Once the page content is rendered, it is saved locally so that the required data points can be scraped later. Although this is comparatively easy, there is a higher chance of encountering errors while scraping with the browser automation method.
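A minimal sketch of this approach in Python might look like the following. It simply opens the page in a real browser, waits for the scripts to run and saves the rendered HTML locally for later parsing; the URL is a placeholder and the fixed sleep is a crude stand-in for a proper explicit wait:

```python
# Render a JavaScript-heavy page in a real browser via Selenium, then save
# the rendered HTML locally so the data points can be scraped later.
import time
from selenium import webdriver

driver = webdriver.Chrome()        # assumes Chrome and its driver are available
try:
    driver.get("https://example.com/dynamic-page")
    time.sleep(5)                  # crude wait; WebDriverWait would be better
    with open("rendered_page.html", "w", encoding="utf-8") as f:
        f.write(driver.page_source)   # HTML after JavaScript has executed
finally:
    driver.quit()
```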

3. Handling POST requests

Many web pages will only display the data that we need after receiving a certain input from the user. Let's say you are looking for used car data from a particular geo-location on a classifieds site. The website would first require you to enter the ZIP code of the location you need listings from. This ZIP code must be sent to the website as a POST request while scraping. We craft the POST request with the appropriate parameters so as to reach the target page that contains all the data points to be scraped.
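A simplified version of such a request, using Python's requests library, is shown below. The URL and form field names are hypothetical and would be discovered by inspecting the site's search form:

```python
# Send the ZIP code as a POST request so the site returns the listings page
# for that location, which can then be parsed for the required data points.
import requests

payload = {"zip_code": "94016", "category": "used-cars"}   # hypothetical fields
response = requests.post("https://example-classifieds.com/search",
                         data=payload, timeout=30)
print(response.status_code)
# response.text now holds the listings for that ZIP code, ready for parsing.
```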

4. Manufacturing the JSON URL

There are dynamic web pages that use AJAX calls to load and refresh the page content. These are particularly difficult to scrape and extract data from, as the triggers that build the JSON URL are difficult to trace. This requires a lot of manual inspection and testing, but once the appropriate parameters are identified, a JSON URL that fetches the target page containing the desired data points can be manufactured. This JSON URL is often tweaked automatically for navigation or for fetching varying data points. Manufacturing the JSON URL with apt parameters is the primary pain point with web pages that use AJAX calls.
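As an illustration (with a made-up endpoint and parameters), once the AJAX endpoint has been identified from the browser's network traffic, the JSON URL can be requested directly and its parameters tweaked for pagination:

```python
# Call the discovered AJAX/JSON endpoint directly and page through results.
# The endpoint and parameter names are hypothetical.
import requests

endpoint = "https://example.com/api/listings"
for page in range(1, 4):
    params = {"category": "laptops", "page": page, "per_page": 50}
    data = requests.get(endpoint, params=params, timeout=30).json()
    for item in data.get("results", []):
        print(item.get("title"), item.get("price"))
```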

Bottom-line

Scraping dynamic web pages is extremely complicated and demands deep expertise in the field of web scraping. It also demands an extensive tech stack and well-built infrastructure that can handle the complexities associated with web data extraction. With our years of expertise and well-evolved web scraping infrastructure, we cater to data requirements involving dynamic web pages on a daily basis.

Source: https://www.promptcloud.com/blog/scraping-dynamic-websites-web-scraping

Friday, 30 June 2017

7 Best Web Scraping Software tools to Acquire Data Without Coding

Ever since the World Wide Web started growing in terms of data size and quality, businesses and data enthusiasts have been looking for methods to extract this data from the web. Today, there are various ways to acquire data from websites of your preference. Some are meant for hobbyists and some are suitable for enterprises. DIY web scraping software belongs to the former category. If you need data from a few websites of your choice for quick research or a one-off project, these tools are more than enough. DIY web scraping tools are much easier to use than programming your own web scraping setup. Here are some of the best web scraping software tools available in the market right now.

1. Outwit Hub
Outwit Hub is a Firefox extension that can be easily downloaded from the Firefox add-ons store. Once installed and activated, it gives web scraping capabilities to your browser. Out of the box, it has data-point recognition features that can make your scraping job easier. Extracting data from sites using Outwit Hub doesn't demand programming skills. The setup is fairly easy to learn. You can refer to our guide on using Outwit Hub to get started with web scraping using the tool. As it is free of cost, it makes for a great option if you need to scrape some data from the web quickly.

2. Web Scraper Chrome Extension
Web Scraper is a great alternative to Outwit Hub, available as an extension for Google Chrome. It lets you set up a sitemap (plan) for how a website should be navigated and what data should be extracted. It can scrape multiple pages simultaneously and even has dynamic data extraction capabilities. Web Scraper can also handle pages with JavaScript and AJAX, which makes it all the more powerful. The tool lets you export the extracted data to a CSV file. The only downside to the Web Scraper extension is that it doesn't have many automation features built in. Learn how to use Web Scraper to extract data from the web.

3. Spinn3r
Spinn3r is a great choice for scraping entire blogs, news sites, social media and RSS feeds. Spinn3r uses a firehose API that manages 95% of the crawling and indexing work. It gives you the option to filter the data that it scrapes using keywords, which helps in weeding out irrelevant content. The indexing system of Spinn3r is similar to Google's and saves the extracted data in JSON format. Spinn3r works by continuously scanning the web and updating its data sets. It has an admin console packed with features that lets you perform searches on the raw data. Spinn3r is an ideal solution if your data requirements are limited to media websites.

4. Fminer
Fminer is one of the easiest-to-use web scraping tools out there, and it combines best-in-class features. Its visual dashboard makes extracting data from websites as simple and intuitive as possible. Whether you want to scrape data from simple web pages or carry out complex data fetching projects that require proxy server lists, AJAX handling and multi-layered crawls, Fminer can do it all. If your web scraping project is fairly complex, Fminer is the software you need.

5. Dexi.io
Dexi.io is a web-based scraping application that doesn't require any download. It is a browser-based tool that lets you set up crawlers and fetch data in real time. Dexi.io also has features that let you save the scraped data directly to Box.net and Google Drive or export it as JSON or CSV files. It also supports scraping data anonymously using proxy servers. The data you scrape is hosted on their servers for up to 2 weeks before it's archived.

6. ParseHub
ParseHub is a web scraping tool that supports complicated data extraction from sites that use AJAX, JavaScript, redirects and cookies. It is equipped with machine learning technology that can read and analyse documents on the web to output relevant data. ParseHub is available as a desktop client for Windows, Mac and Linux, and there is also a web app that you can use within the browser. You can have up to 5 crawl projects with the free plan from ParseHub.


7. Octoparse
Octoparse is a visual web scraping tool that is easy to configure. The point-and-click user interface lets you teach the scraper how to navigate and extract fields from a website. The software mimics a human user while visiting and scraping data from target websites. Octoparse gives you the option to run your extraction in the cloud or on your own local machine. You can export the scraped data in TXT, CSV, HTML or Excel formats.

Tools vs Hosted Services
Although web scraping tools can handle simple to moderate data extraction requirements, they are not a recommended solution if you are a business trying to acquire data for competitive intelligence or market research. When the requirement is large-scale and/or complicated, web scraping tools fail to live up to expectations. DIY tools can be the right choice if your data requirements are limited and the sites you are looking to scrape are not complicated. If you need enterprise-grade data, outsourcing the requirement to a DaaS (Data-as-a-Service) provider would be the ideal option. Dedicated web scraping services will take care of end-to-end data acquisition and will deliver the required data the way you need it.

If your data requirement demands a custom-built setup, a DIY tool cannot cover it. For example, if you need product data for the best-selling products on Amazon at a predefined frequency, you will have to consult a web scraping provider instead of using a software tool. With a tool, the customization options are limited and automation is almost non-existent. Tools also come with the downside of maintenance, which can be a daunting task. A scraping service provider will set up monitoring for the target websites and make sure that the scraping setup is well maintained. The flow of data will be smooth and consistent with a hosted solution.

Source URL: https://www.promptcloud.com/blog/best-web-scraping-software-tools-extract-data

Thursday, 22 June 2017

Data Scraping Doesn’t Have to Be Hard

All You Need Is the Right Data Scraping Partner

Odds are your business needs web data scraping. Data scraping is the act of using software to harvest desired data from target websites. So, instead of you spending every second scouring the internet and copying and pasting from the screen, the software (called “spiders”) does it for you, saving you precious time and resources.

Departments across an organization will profit from data scraping practices.

Data scraping will save countless hours and headaches by doing the following:

- Monitoring competitors’ prices, locations and service offerings
- Harvesting directory and list data from the web, significantly improving your lead generation
- Acquiring customer and product marketing insight from forums, blogs and review sites
- Extracting website data for research and competitive analysis
- Social media scraping for trend and customer analysis
- Collecting regular or even real-time updates of exchange rates, insurance rates, interest rates, mortgage rates, real estate, stock prices and travel prices

It is a no-brainer, really. Businesses of all sizes are integrating data scraping into their business initiatives. Make sure you stay ahead of the competition by effectively data scraping.

Now for the hard part

The “why should you data scrape?” is the easy part. The “how” gets a bit more difficult. Are you savvy in Python and HTML? What about JavaScript and AJAX? Do you know how to utilize a proxy server? As your data collection grows, do you have the cloud-based infrastructure in place to handle the load? If you or someone at your organization can answer yes to these questions, do they have the time to take on all the web data scraping tasks? More importantly, is it a cost-effective use of your valuable staffing resources for them to do this? With constantly changing websites, resulting in broken code and websites automatically blacklisting your attempts, it could be more of a resource drain than anticipated.

Instead of focusing on all the issues above, business users should be concerned with essential questions such as:

- What data do I need to grow my business?
- Can I get the data I need, when I want it and in a format I can use?
- Can the data be easily stored for future analysis?
- Can I maximize my staffing resources and get this data without any programming knowledge or IT assistance?
- Can I start now?
- Can I cost-effectively collect the data needed to grow my business?

A web data scraping partner is standing by to help you!

This is where purchasing innovative web scraping services can be a game changer. The right partner can harness the value of the web for you. They will go into the weeds so you can spend your precious time growing your business.

Hold on a second! Before you run off to purchase data scraping services, you need to make sure you are looking for the solution that best fits your organisational needs. Don’t get overwhelmed. We know that relinquishing control of a critical business asset can be a little nerve-wracking. To help, we have come up with our steps and best practices for choosing the right data scraping company for your organisation.

1) Know Your Priorities

We have brought this up before, but when going through a purchasing decision process we like to turn to Project Management 101: the Project Management Triangle. For this example, we think an Euler diagram version of the triangle fits best.
Data Scraping and the Project Management Triangle

In this example, the constraints show up as Fast (time), Good (quality) and Cheap (cost). This diagram displays the interconnection of all three elements of the project. When using it, you can pick only two priorities; improving any two comes at the expense of the third:

- We can do the project quickly with high quality, but it will be costly
- We can do the project quickly at a reduced cost, but quality will suffer
- We can do a high-quality project at a reduced cost, but it will take much longer

Using this framework can help you shape your priorities and budget. This, in turn, helps you search for and negotiate with a data scraping company.

2) Know your budget/resources.

This one is so important it is on here twice. Knowing your budget and staffing resources before reaching out to data scraping companies is key. This will make your search much more efficient and help you manage the entire process.

3) Have a plan going in.

Once again, you should know your priorities, budget, business objectives and have a high-level data scraping plan before choosing a data scraping company. Here are a few plan guidelines to get you started:

- Know what data points to collect: contact information, demographics, prices, dates, etc.
- Determine where the data points can most likely be found on the internet: your social media and review sites, your competitors' sites, chambers of commerce and government sites, e-commerce sites where your products or competitors' products are sold, etc.
- How frequently do you need this data, and what is the best way to receive it? Make sure you can get the data you need in the correct format. Determine whether you need a full upload each time or just the changes from the previous dataset. Think about whether you want the data delivered via email, direct download or automatically to your Amazon S3 account.
- Who should have access to the data and how will it be stored once it is harvested?
- Finally, the plan should include what you are going to do with all this newly acquired data and who is receiving the final analysis.

4) Be willing to change your plan.

This one may seem counterintuitive after so much focus on having a game plan. However, remember to be flexible. The whole point of hiring experts is that they are the experts. A plan will make discussions much more productive, but the experts will probably offer insight you hadn’t thought of. Be willing to integrate their advice into your plan.

5) Have a list of questions ready for the company.

Having a list of questions ready for the data scraping company will help keep you in charge of the discussions and negotiations. Here are some points that you should know before choosing a data scraping partner:
- Can they start helping you immediately? Make sure they have the infrastructure and staff to get you off the ground in a matter of weeks, not months.
- Make sure you can access them via email and phone. Also make sure you have access to those actually performing the data scraping, not just a call center.
- Can they tailor their processes to fit with your requirements and organisational systems?
- Can they scrape more than plain text? Make sure they can harvest complex and dynamic sites with JavaScript and AJAX. If a website's content can be viewed in a browser, they should be able to get it for you.
- Make sure they have monitoring systems in place that can detect changes, breakdowns and quality issues. This will ensure you have access to a persistent and reliable flow of data, even when the targeted websites change formats.
- As your data grows, can they easily keep up? Make sure they have scalable solutions that can handle all that unstructured web data.
- Will they protect your company? Make sure they know discretion is important and that they will not advertise you as a client unless you give permission. Also, check to see how they disguise their scrapers so that the data harvesting cannot be traced back to your business.

6) Check their reviews.

Do a bit of your own manual data scraping to see what other businesses are saying about the companies you are researching.

7) Make sure the plan the company offers is cost-effective.

Here are a few questions to ask to make sure you get a full view of the costs and fees in the estimate:
- Is there a setup fee?
- What are the fixed costs associated with this project?
- What are the variable costs and how are they calculated?
- Are there any other taxes, fees or things that I could be charged for that are not listed on this quote?
- What are the payment terms?

Source URL: http://www.data-scraping.com.au/data-scraping-doesnt-have-to-be-hard/

Saturday, 17 June 2017

How We Maintain Data Quality While Handling Large Scale Extraction

The demand for high quality data is increasing along with the rise in products and services that require data to run. Although the information available on the web is increasing in terms of quantity and quality, extracting it in a clean, usable format remains challenging for most businesses. Having been in the web data extraction business for long enough, we have come to identify the best practices and tactics that ensure high quality data from the web.

At PromptCloud, we not only make sure data is accessible to everyone; we also make sure it's of high quality, clean and delivered in a structured format. Here is how we maintain that quality while handling zettabytes of data for hundreds of clients from across the world.

Manual QA process

1. Crawler review

Every web data extraction project starts with the crawler setup. Here, the quality and stability of the crawler code are of high priority, as these have a direct impact on the data quality. The crawlers are programmed by our tech team members, who have high technical acumen and experience. Once the crawler is built, two peers review the code to make sure that the optimal approach is used for extraction and to ensure there are no inherent issues with the code. Once this is done, the crawler is deployed on our dedicated servers.

2. Data review

The initial set of data starts coming in when the crawler is run for the first time. This data is manually inspected, first by the tech team and then by one of our business representatives before the setup is finalized. This manual layer of quality check is thorough and weeds out any possible issues with the crawler or the interaction between the crawler and website. If issues are found, the crawler is tweaked to eliminate them completely before the setup is marked complete.

Automated monitoring

Websites get updated over time, more frequently than you'd imagine. Some of these changes can break the crawler or cause it to start extracting the wrong data. This is why we have developed a fully automated monitoring system to watch over all the crawling jobs happening on our servers. This monitoring system continuously checks the incoming data for inconsistencies and errors. There are three types of issues it looks for:

1. Data validation errors

Every data point has a defined value type. For example, the data point 'Price' will always have a numerical value, not text. When a website changes, class name mismatches might cause the crawler to extract the wrong data for a certain field. The monitoring system checks whether all the data points are in line with their respective value types. If an inconsistency is found, the system immediately sends out a notification to the team members handling that project, and the issue is fixed promptly.
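A heavily simplified sketch of this kind of check is shown below; the real monitoring system is more elaborate, and the field names here are only examples:

```python
# Flag records whose fields do not match the expected value types.
EXPECTED_TYPES = {"price": (int, float), "title": str, "stock_count": int}

def validate(record: dict) -> list:
    """Return a list of human-readable problems found in one record."""
    errors = []
    for field, expected in EXPECTED_TYPES.items():
        value = record.get(field)
        if not isinstance(value, expected):
            errors.append(f"{field}: unexpected value {value!r}")
    return errors

print(validate({"price": "N/A", "title": "Widget", "stock_count": 3}))
# -> ["price: unexpected value 'N/A'"]
```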

2. Volume based inconsistencies

There can be cases where the record count drops or increases significantly in an irregular fashion. This is a red flag as far as web crawling goes. The monitoring system already knows the expected record count for each project. If inconsistencies are spotted in the data volumes, the system sends out a prompt notification.
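In spirit, such a volume check boils down to comparing the incoming record count against the expected baseline and alerting on large deviations, as in this simplified sketch (the baseline and tolerance are made-up numbers):

```python
# Alert when the crawled record count deviates too far from the expected value.
def check_volume(record_count: int, expected: int, tolerance: float = 0.2) -> bool:
    deviation = abs(record_count - expected) / expected
    if deviation > tolerance:
        print(f"ALERT: got {record_count} records, expected about {expected}")
        return False
    return True

check_volume(record_count=3200, expected=10000)   # triggers an alert
```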

3. Site changes

Structural changes to the target websites are the main reason why crawlers break. This is monitored quite aggressively by our dedicated monitoring system. The tool performs frequent checks on the target site to make sure nothing has changed since the previous crawl. If changes are found, it sends out notifications for the same.
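One simple way to detect such changes (a sketch, not necessarily the exact implementation) is to fingerprint the structural skeleton of a reference page and compare it with the value stored after the previous crawl:

```python
# Hash the sequence of HTML tags on a page; a changed hash suggests the
# page structure has changed since the last crawl. The URL is a placeholder.
import hashlib
import requests
from bs4 import BeautifulSoup

def structure_fingerprint(url: str) -> str:
    html = requests.get(url, timeout=30).text
    tags = [tag.name for tag in BeautifulSoup(html, "html.parser").find_all(True)]
    return hashlib.sha256(",".join(tags).encode()).hexdigest()

previous_fingerprint = "..."   # value stored after the previous crawl
if structure_fingerprint("https://example.com/listing") != previous_fingerprint:
    print("ALERT: page structure has changed since the previous crawl")
```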

High-end servers

It is understood that web crawling is a resource-intensive process that needs high-performance servers. The quality of the servers determines how smoothly the crawling happens, and this in turn has an impact on the quality of the data. Having firsthand experience of this, we use high-end servers to deploy and run our crawlers. This helps us avoid instances where crawlers fail due to heavy load on the servers.

Data cleansing

The initially crawled data might contain unnecessary elements like HTML tags. In that sense, this data can be called crude. Our cleansing system does an exceptionally good job of eliminating these elements and cleaning up the data thoroughly. The output is clean data without any of the unwanted elements.
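For a flavour of what cleansing involves, here is a tiny example that strips residual HTML tags and collapses whitespace using BeautifulSoup; the input string is illustrative:

```python
# Strip HTML tags and normalise whitespace so only the text content remains.
import re
from bs4 import BeautifulSoup

crude = "<div class='price'> $499 <span>(incl. tax)</span>\n</div>"
text = BeautifulSoup(crude, "html.parser").get_text(" ", strip=True)
clean = re.sub(r"\s+", " ", text)
print(clean)   # -> "$499 (incl. tax)"
```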

Structuring

Structuring is what makes the data compatible with databases and analytics systems by giving it a proper, machine-readable syntax. This is the final process before delivering the data to the clients. With structuring done, the data is ready to be consumed, either by importing it into a database or plugging it into an analytics system. We deliver the data in multiple formats (XML, JSON and CSV), which also adds to the convenience of handling it.
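As a minimal illustration of the structuring step, the same cleaned records can be written out as JSON and CSV with Python's standard library (the records here are sample data):

```python
# Write the same structured records to JSON and CSV for downstream use.
import csv
import json

records = [
    {"title": "Widget A", "price": 499.0, "in_stock": True},
    {"title": "Widget B", "price": 129.5, "in_stock": False},
]

with open("output.json", "w") as f:
    json.dump(records, f, indent=2)

with open("output.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["title", "price", "in_stock"])
    writer.writeheader()
    writer.writerows(records)
```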

Source: https://www.promptcloud.com/blog/how-we-maintain-data-quality-web-data-extraction