Tuesday 30 August 2016

Why Healthcare Companies should look towards Web Scraping

The internet is a massive storehouse of information, available in the form of text, media and other formats. To be competitive in the modern world, most businesses need access to this storehouse. But not all of this information is freely accessible, as many websites do not let you save their data. This is where the process of web scraping comes in handy.

Web scraping is not new: it has been widely used by financial organizations to detect fraud, by marketers for marketing and cross-selling, and by manufacturers for maintenance scheduling and quality control. It has endless uses for businesses and individuals, and each has its own particular needs for collecting data. You might want to access data belonging to a particular category from several websites, yet different websites in that category display information in non-uniform formats. Even within a single website, you may not be able to access all the data in one place.

The data may be distributed across multiple pages under various headings. In a market that is vast and evolving rapidly, strategic decision-making demands accurate and thorough data, analyzed on a periodic basis. Web scraping can help you mine data from several websites and store it in a single place, so that it becomes convenient for you to analyze the data and deliver results.
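
As a rough illustration of this idea (the URLs, page structure and field names below are placeholders rather than real healthcare sites, and the third-party requests library is assumed), a short Python script might fetch several pages and consolidate the extracted records into one CSV file:

import csv
import re
import requests

# Placeholder sources; in practice these would be pages you are permitted to scrape.
SOURCES = [
    "https://example.com/healthcare/prices",
    "https://example.org/healthcare/prices",
]

rows = []
for url in SOURCES:
    html = requests.get(url, timeout=10).text
    # Naive extraction: pull (service, price) pairs out of simple table cells.
    for service, price in re.findall(r"<td>([^<]+)</td>\s*<td>\$([\d.]+)</td>", html):
        rows.append({"source": url, "service": service.strip(), "price": price})

# Store everything in one place so it can be analyzed later.
with open("consolidated_prices.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["source", "service", "price"])
    writer.writeheader()
    writer.writerows(rows)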

In the context of healthcare, web scraping is gradually gaining a foothold. Several factors have led to its use in healthcare. The voluminous amount of data produced by the healthcare industry is too complex to be analyzed by traditional techniques. Web scraping, along with data extraction, can improve decision-making by revealing trends and patterns in huge amounts of intricate data. Such intensive analyses are becoming progressively vital owing to financial pressures that have increased the need for healthcare organizations to base their conclusions on the analysis of financial and clinical data. Furthermore, increasing cases of medical insurance fraud and abuse are encouraging healthcare insurers to turn to web scraping and data extraction techniques.

Healthcare is no longer a sector relying solely on person-to-person interaction. It has gone digital in its own way, and the industry's different stakeholders, such as doctors, nurses, patients and pharmacists, are raising their technological game to stay in sync with the changing times. In the existing setup, where choices are data-centric, web scraping in healthcare can impact lives, educate people, and create awareness. As people no longer depend only on doctors and pharmacists, web scraping in healthcare can improve lives by offering rational solutions.

To be successful in the healthcare sector, it is important to find ways of gathering and presenting information to patients and customers that are innovative and informative. Web scraping offers a plethora of solutions for the healthcare industry. With web scraping and data extraction solutions, healthcare companies can monitor and gather information as well as track how their healthcare products are being received, used and implemented in different locales. It offers safer and more comprehensive access to data, allowing healthcare experts to make the right decisions, which ultimately leads to a better clinical experience for patients.

Web scraping not only gives healthcare professionals access to enterprise-wide information but also simplifies the process of data conversion for predictive analysis and reports. Analyzing user reviews of precautions and symptoms for diseases that are still incurable and under active medical research can help mitigate people's fear. Such analysis can be based on data available from patients and is one way of creating awareness among people.

Hence, web scraping can increase the significance of data collection and help doctors make sense of raw data. With web scraping and data extraction techniques, healthcare insurers can reduce fraud attempts, healthcare organizations can make better customer relationship management decisions, doctors can identify effective cures and best practices, and patients can get better and more affordable healthcare services.

Web scraping applications in healthcare have remarkable utility and potential. However, the success of web scraping and data extraction techniques in the healthcare sector depends on access to clean healthcare data. For this, it is imperative that the healthcare industry think about how data can be better recorded, stored, primed, and scraped. For instance, the sector could standardize clinical vocabulary and allow data sharing across organizations to heighten the benefits of healthcare web scraping practices.

Healthcare is one of the top sectors in which data is multiplying exponentially over time, and it requires planned, structured data storage. Continuous web scraping and data extraction are necessary to gain the insights needed to renew health insurance policies periodically as well as to offer better and more affordable public health solutions. Together, web scraping and data extraction can process the mammoth mounds of healthcare data and transform them into information useful for decision making.

To reduce the gap between the various components of the healthcare sector (patients, doctors, pharmacies and hospitals), healthcare organizations and websites will have to tap this technology to collect data in all formats and present it in a usable form. The healthcare sector needs to overcome its lag in implementing effective web scraping and data extraction techniques and intensify its pace of technology adoption. Web scraping can contribute enormously to the healthcare industry, enabling organizations to methodically collect data and process it to identify inadequacies and best practices that improve patient care and reduce costs.

Source: https://www.promptcloud.com/blog/why-health-care-companies-should-use-web-scraping

Monday 22 August 2016

ERP Data Conversions - Best Practices and Steps

Every company that has gone through an ERP project has also gone through the painful process of getting its data ready for the new system. This process typically involves the following steps:

(1) Extract or define

(2) Clean and transform

(3) Load

(4) Validate and verify

This process is typically executed multiple times (2-5+ times, depending on complexity) over the course of an ERP project to ensure that good data ends up in the new system. If the data is incorrect, not cleaned or adjusted well enough, or loaded incorrectly into the new system, it can cause serious problems once the new system is launched.

(1) Extract or define

This involves extracting the data from the legacy systems that are to be decommissioned. In some cases the data may not exist in a legacy system at all; the old process may be spreadsheet-based, so the data has to be created from scratch. Typically this step involves writing extraction programs or leveraging existing reports to get the data into a format that can be loaded into a spreadsheet or a data management application.

(2) Data cleansing

Once extracted, the data is normally reviewed for accuracy by the business, supported by the IT team, and adjusted if it is incorrect or in a structure the new ERP system does not understand. Depending on the level of change and the data quality, this can represent a significant effort involving many business stakeholders, and it may need to go through multiple cycles.
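
As a minimal sketch of what this cleansing step can look like in practice (the file layout, field names and rules below are hypothetical), a small Python script might normalize formats and split out records the new system cannot accept:

import csv

REQUIRED = ["customer_id", "name", "country"]

def cleanse(in_path, out_path, reject_path):
    """Normalize legacy customer records and split out rejects."""
    with open(in_path, newline="") as src, \
         open(out_path, "w", newline="") as good, \
         open(reject_path, "w", newline="") as bad:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(good, fieldnames=reader.fieldnames)
        rejects = csv.DictWriter(bad, fieldnames=reader.fieldnames + ["reason"])
        writer.writeheader()
        rejects.writeheader()
        for row in reader:
            # Trim whitespace and standardize casing before validation.
            row = {k: (v or "").strip() for k, v in row.items()}
            row["country"] = row.get("country", "").upper()
            missing = [f for f in REQUIRED if not row.get(f)]
            if missing:
                row["reason"] = "missing: " + ", ".join(missing)
                rejects.writerow(row)
            else:
                writer.writerow(row)

if __name__ == "__main__":
    cleanse("legacy_customers.csv", "clean_customers.csv", "rejected_customers.csv")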

(3) Load data to new system

Once the data is structured in a format the receiving ERP system can handle, the load programs may also be built to handle certain changes as part of converting the data into the new system. Data is loaded into interface tables and then into the new system's core master data and transaction tables.

When loading the data into the new system, it is key to consider the inter-dependencies between the different data elements and to validate those cross-dependencies. Exceptions are dealt with, captured as lessons learned, and used to modify the extract, data cleansing or load process in the next cycle.
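
As an illustrative sketch only (the table and column names are hypothetical, and SQLite stands in for the target database), a load step might stage records in an interface table and treat transactions whose master-data dependencies are missing as exceptions:

import sqlite3

def load_orders(orders):
    """Stage order rows and load only those whose customer already exists."""
    con = sqlite3.connect(":memory:")  # stand-in for the target database
    con.execute("CREATE TABLE customers (customer_id TEXT PRIMARY KEY, name TEXT)")
    con.execute("CREATE TABLE iface_orders (order_id TEXT, customer_id TEXT, amount REAL)")
    con.execute("CREATE TABLE orders (order_id TEXT, customer_id TEXT, amount REAL)")

    # Master data is assumed to be loaded before dependent transactions.
    con.executemany("INSERT INTO customers VALUES (?, ?)",
                    [("C001", "Acme"), ("C002", "Globex")])

    # 1) Load everything into the interface (staging) table.
    con.executemany("INSERT INTO iface_orders VALUES (?, ?, ?)", orders)

    # 2) Move only rows whose cross-dependency (customer) is satisfied.
    con.execute("""INSERT INTO orders
                   SELECT i.* FROM iface_orders i
                   JOIN customers c ON c.customer_id = i.customer_id""")

    # 3) Report exceptions so they feed into the next conversion cycle.
    exceptions = con.execute("""SELECT i.* FROM iface_orders i
                                LEFT JOIN customers c ON c.customer_id = i.customer_id
                                WHERE c.customer_id IS NULL""").fetchall()
    return exceptions

print(load_orders([("O1", "C001", 100.0), ("O2", "C999", 50.0)]))  # O2 is an exception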

(4) Validate and verify

The final phase of the data conversion process is to verify the converted data, through extracts, reports or manual checks, to ensure that all the data went in correctly. This may involve both internal and external audit groups as well as all the key data owners. Part of the testing also includes attempting to transact successfully using the converted data.
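
A simple form of such verification (continuing the hypothetical names from the load sketch above) is to reconcile the record count and a control total between the cleansed extract and the loaded table:

import csv

def reconcile(extract_path, con):
    # `con` is a database connection to the target system (e.g. sqlite3).
    with open(extract_path, newline="") as f:
        rows = list(csv.DictReader(f))
    src_count = len(rows)
    src_total = round(sum(float(r["amount"]) for r in rows), 2)

    tgt_count, tgt_total = con.execute(
        "SELECT COUNT(*), ROUND(COALESCE(SUM(amount), 0), 2) FROM orders").fetchone()

    print("extract:", src_count, src_total, "| target:", tgt_count, tgt_total)
    return (src_count, src_total) == (tgt_count, tgt_total)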

I would prioritize the top success factors, or best practices, for executing a successful conversion as follows:

(1) Start the data conversion early enough by assessing the quality of the data. Starting too late can result either in costly project delays or in decisions to load garbage and "deal with it later", leading to more problems as the new system is launched.

(2) Identify and assign data owners and customers (often forgotten) for the different data elements. Ensure not only that the data owners sign off on the data conversions but also that the key users of the data are involved in reviewing the selection criteria, the data cleansing process and the load verification.

(3) Run enough rounds of testing of the data, including not only validating the loads but also transacting with the converted data.

(4) Depending on the complexity, evaluate tools beyond spreadsheets and custom programming to help with the cleansing, transformation and load steps of the data conversion process.

(5) Don't underestimate the effort involved in cleansing and validating the converted data.

(6) Define processes, and consider additional tools, for maintaining the accuracy of the data after the system goes live.

Source: http://ezinearticles.com/?ERP-Data-Conversions---Best-Practices-and-Steps&id=7263314

Wednesday 10 August 2016

Difference between Data Mining and KDD

Data, in its raw form, is just a collection of items from which little information can be derived. With the development of knowledge discovery methods (data mining and KDD), the value of that data is significantly improved.

Data mining is one of the steps of Knowledge Discovery in Databases (KDD). KDD is a multi-step process that supports the conversion of data into useful information, and data mining is the pattern extraction phase of that process. Data mining can take several forms, with the choice influenced by the desired outcomes.

Knowledge Discovery in Databases Steps
Data Selection

KDD cannot be carried out without human interaction. Choosing the dataset, and the subset of it to work with, requires knowledge of the domain from which the data is to be taken. Removing unrelated data elements from the dataset reduces the search space during the data mining phase of KDD. The sample size and structure are also established at this point, if the dataset can be assessed using a sample of the data.
Pre-processing

Databases often contain incorrect or missing data. During the pre-processing phase, the data is cleaned. This involves removing "outliers" where appropriate, choosing strategies for handling missing data fields, accounting for time-sequence information, and applying appropriate normalization of the data.
Transformation

In the transformation phase, attempts can be made to reduce the number of data elements under consideration while preserving the quality of the information. During this stage, data is organized, converted from one type to another (e.g. nominal to numeric), and new or "derived" attributes are defined.
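
As a small, hypothetical Python example of this kind of transformation, nominal values can be mapped to numeric codes and a derived attribute added:

# Hypothetical patient records with a nominal "smoker" attribute.
records = [
    {"age": 34, "weight_kg": 70, "height_m": 1.75, "smoker": "no"},
    {"age": 58, "weight_kg": 92, "height_m": 1.68, "smoker": "yes"},
]

SMOKER_CODES = {"no": 0, "yes": 1}  # nominal -> numeric

transformed = []
for r in records:
    transformed.append({
        "age": r["age"],
        "smoker": SMOKER_CODES[r["smoker"]],                   # type conversion
        "bmi": round(r["weight_kg"] / r["height_m"] ** 2, 1),  # derived attribute
    })

print(transformed)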
Data mining

Now the data is subjected to one or several data mining methods such as regression, classification, or clustering. The data mining part of KDD usually requires repeated, iterative application of particular data mining methods, and different techniques or models can be used depending on the expected outcome.
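
For instance, a minimal clustering pass over numeric records like those produced above might look like this (assuming the scikit-learn library is installed; the data and cluster count are purely illustrative):

from sklearn.cluster import KMeans

# Numeric feature vectors (e.g. [age, bmi]) produced by a transformation step.
X = [[34, 22.9], [58, 32.6], [41, 24.1], [63, 31.0]]

model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
for features, label in zip(X, model.labels_):
    print(features, "-> cluster", label)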
Evaluation

The final step is the documentation and interpretation of the outcomes of the previous steps. Activities in this phase may include returning to a previous step of the KDD process to refine the acquired knowledge, or converting the knowledge into a form that is clear to the user. In this stage the extracted data patterns are also visualized for further review.
Conclusion

Data mining is a very crucial step of the KDD process.

For further reading about KDD and data mining, please check this link.

Source: http://nocodewebscraping.com/difference-data-mining-kdd/

Thursday 4 August 2016

Data Discovery vs. Data Extraction

Looking at screen-scraping at a simplified level, there are two primary stages involved: data discovery and data extraction. Data discovery deals with navigating a web site to arrive at the pages containing the data you want, and data extraction deals with actually pulling that data off of those pages. Generally when people think of screen-scraping they focus on the data extraction portion of the process, but my experience has been that data discovery is often the more difficult of the two.

The data discovery step in screen-scraping might be as simple as requesting a single URL. For example, you might just need to go to the home page of a site and extract the latest news headlines. On the other side of the spectrum, data discovery may involve logging in to a web site, traversing a series of pages in order to get needed cookies, submitting a POST request on a search form, traversing through search results pages, and finally following all of the "details" links within the search results pages to get to the data you're actually after. In the former case, a simple Perl script would often work just fine. For anything much more complex than that, though, a commercial screen-scraping tool can be an incredible time-saver. Especially for sites that require logging in, writing code to handle screen-scraping can be a nightmare when it comes to dealing with cookies and such.
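
To make the discovery stage concrete, here is a rough Python sketch (the site, URLs and form fields are invented for illustration, and the third-party requests library is assumed) of logging in, submitting a search and collecting the "details" links:

import re
import requests

BASE = "https://example.com"  # placeholder site

session = requests.Session()  # keeps cookies across requests

# 1) Log in so the session carries the authentication cookies.
session.post(BASE + "/login", data={"user": "me", "password": "secret"})

# 2) Submit the search form as a POST request.
results = session.post(BASE + "/search", data={"q": "widgets"})

# 3) Follow each "details" link found in the search-results page.
detail_urls = re.findall(r'href="(/details/\d+)"', results.text)
detail_pages = [session.get(BASE + url).text for url in detail_urls]
print("Discovered %d detail pages" % len(detail_pages))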

In the data extraction phase you've already arrived at the page containing the data you're interested in, and you now need to pull it out of the HTML. Traditionally this has involved creating a series of regular expressions that match the pieces of the page you want (e.g., URLs and link titles). Regular expressions can be a bit complex to deal with, so most screen-scraping applications will hide these details from you, even though they may use regular expressions behind the scenes.
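
As a hedged sketch of that extraction step (the pattern assumes simple, well-formed anchor tags, which real pages often are not):

import re

html = '<a href="https://example.com/a">First headline</a> <a href="https://example.com/b">Second headline</a>'

# Capture the URL and the link title from each anchor tag.
LINK_RE = re.compile(r'<a\s+href="([^"]+)"[^>]*>(.*?)</a>', re.IGNORECASE | re.DOTALL)

for url, title in LINK_RE.findall(html):
    print(title, "->", url)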

As an addendum, I should probably mention a third phase that is often ignored, and that is, what do you do with the data once you've extracted it? Common examples include writing the data to a CSV or XML file, or saving it to a database. In the case of a live web site you might even scrape the information and display it in the user's web browser in real-time. When shopping around for a screen-scraping tool you should make sure that it gives you the flexibility you need to work with the data once it's been extracted.
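
For that third phase, writing the extracted records to a CSV file can be as simple as the following (the column names are just an example):

import csv

rows = [("First headline", "https://example.com/a"),
        ("Second headline", "https://example.com/b")]

with open("headlines.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["title", "url"])  # header row
    writer.writerows(rows)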

Source: http://ezinearticles.com/?Data-Discovery-vs.-Data-Extraction&id=165396

Monday 1 August 2016

Scraping LinkedIn Public Profiles for Fun and Profit

Reconnaissance and information gathering is a part of almost every penetration testing engagement. Often, the tester will only perform network reconnaissance in an attempt to disclose and learn the company's network infrastructure (i.e. IP addresses, domain names, etc.), but there are other types of reconnaissance to conduct, and no, I'm not talking about dumpster diving. Thanks to social networks like LinkedIn, OSINT/WEBINT is now yielding more information. This information can then be used to help the tester test anything from social engineering to weak passwords.

In this blog post I will show you how to use Pythonect to easily generate potential passwords from LinkedIn public profiles. If you haven't heard about Pythonect yet, it is a new, experimental, general-purpose dataflow programming language based on the Python programming language. Pythonect is most suitable for creating applications that are themselves focused on the "flow" of the data. An application that generates passwords from the public LinkedIn profiles of a given company's employees has a coherent and clear dataflow:

(1) Find all the employees' public LinkedIn profiles → (2) Scrape all the employees' public LinkedIn profiles → (3) Crunch all the data into potential passwords

Now that we have the general concept and high-level overview out of the way, let's dive into the details.

Finding all the employees' public LinkedIn profiles will be done via Google Custom Search Engine, a free service by Google that allows anyone to create their own search engine. The idea is to create a search engine that, when given a company name, will return all the employees' public LinkedIn profiles. How? When creating a Google Custom Search Engine it's possible to restrict the search results to a specific site (i.e. 'Sites to search'), and we're going to limit ours to linkedin.com. It's also possible to fine-tune the results even further, e.g. uk.linkedin.com to find only employees from the United Kingdom.

Access to the newly created Google Custom Search Engine will be made using a free API key obtained from the Google API Console. Why go through the Google API? Because it allows automation (no CAPTCHAs), and it also means that the search-result pages will be returned as JSON (as opposed to HTML). The only catch with the free API key is that it's limited to 100 queries per day, but it's possible to buy an API key that is not limited.

Scraping the profiles is a matter of iterating over all the hCards in all the search-result pages and extracting the employee name from each hCard. What is an hCard? hCard is a microformat for publishing the contact details of people, companies, organizations, and places. hCard is also supported by social networks such as Facebook, Google+ and LinkedIn for exporting public profiles. Google parses hCards when indexing and, when relevant, uses them in search-result pages. In other words, when search-result pages include LinkedIn public profiles, they will appear as hCards and can be easily parsed.

Let's see the implementation of the above:

#!/usr/bin/python
#
# Copyright (C) 2012 Itzik Kotler
#
# scraper.py is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# scraper.py is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with scraper.py.  If not, see <http://www.gnu.org/licenses/>.

"""Simple LinkedIn public profiles scraper that uses Google Custom Search"""

import urllib
import simplejson


BASE_URL = "https://www.googleapis.com/customsearch/v1?key=<YOUR GOOGLE API KEY>&cx=<YOUR GOOGLE SEARCH ENGINE CX>"


def __get_all_hcards_from_query(query, index=0, hcards={}):
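    """Recursively walk the Google Custom Search result pages for `query`,
    collecting hCards as a dict that maps full name ('fn') to 'title'."""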

    url = query

    if index != 0:

        url = url + '&start=%d' % (index)

    json = simplejson.loads(urllib.urlopen(url).read())

    if json.has_key('error'):

        print "Stopping at %s due to Error!" % (url)

        print json

    else:

        for item in json['items']:

            try:

                hcards[item['pagemap']['hcard'][0]['fn']] = item['pagemap']['hcard'][0]['title']

            except KeyError as e:

                pass

        if json['queries'].has_key('nextPage'):

            return __get_all_hcards_from_query(query, json['queries']['nextPage'][0]['startIndex'], hcards)

    return hcards


def get_all_employees_by_company_via_linkedin(company):
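    """Search linkedin.com (via Google Custom Search) for people "at <company>"
    and return the de-duplicated list of full names found in the result hCards."""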

    queries = ['"at %s" inurl:"in"', '"at %s" inurl:"pub"']

    result = {}

    for query in queries:

        _query = query % company

        result.update(__get_all_hcards_from_query(BASE_URL + '&q=' + _query))

    return list(result)

Replace <YOUR GOOGLE API KEY> and <YOUR GOOGLE SEARCH ENGINE CX> in the code above with your Google API Key and Google Search Engine CX respectively, save it to a file called scraper.py, and you're ready!

To kick-start, here is a simple program in Pythonect (utilizing the scraper module) that searches for and prints the full names of all Pythonect company employees:

"Pythonect" -> scraper.get_all_employees_by_company_via_linkedin -> print

The output should be:

Itzik Kotler

In my LinkedIn profile, I have listed Pythonect as a company that I work for, and since no one else works there, when searching for all the employees of the Pythonect company only my LinkedIn profile comes up.
For demonstration purposes I will keep using this example (i.e. the "Pythonect" company and the "Itzik Kotler" employee), but go ahead and replace Pythonect with other, more popular, company names and see the results.

Now that we have a working skeleton, let's take its output and start crunching it. Keep in mind that every "password generation formula" is merely a guess. The examples below are only a sampling of what can be done; there are obviously many more possibilities, and you are encouraged to experiment. But first, let's normalize the output so that it is consistent before operations are performed on it:

"Pythonect" -> scraper.get_all_employees_by_company_via_linkedin -> string.lower(''.join(_.split()))

The normalization procedure is short and simple: convert the string to lowercase and remove any spaces. The output should now be:

itzikkotler

As for data manipulation, out of the box (thanks to the Python Standard Library) we've got itertools and its combinatoric generators. Let's start by applying itertools.product:

"Pythonect" -> scraper.get_all_employees_by_company_via_linkedin -> string.lower(''.join(_.split())) -> itertools.product(_, repeat=4) -> print

The code above will generate and print every 4-character password from the letters i, t, z, k, o, l, e, r. However, it won't cover passwords that contain uppercase letters. So here's a simple and straightforward implementation of a cycle_uppercase function that cycles over the input letters, yielding copies of the input with one letter uppercased at a time:

def cycle_uppercase(i):
    s = ''.join(i)
    for idx in xrange(0, len(s)):
        yield s[:idx] + s[idx].upper() + s[idx+1:]

To use it, save it to a file called itertools2.py, and then simply add it to the Pythonect program after the itertools.product(_, repeat=4) block, as follows:

"Pythonect" -> scraper.get_all_employees_by_company_via_linkedin \
    -> string.lower(''.join(_.split())) \
        -> itertools.product(_, repeat=4) \
            -> itertools2.cycle_uppercase \
                -> print

Now the program will also cover passwords that include a single uppercase letter. Moving on with the data manipulation: sometimes the password might contain symbols that are not found within the scraped data. In this case, it is necessary to build a generator that will take the input and add symbols to it. Here is a short and simple generator implemented as a Generator Expression:

[_ + postfix for postfix in ['123','!','$']]

To use it, simply add it to the Pythonect program after the itertools2.cycle_uppercase block, as follows:

"Pythonect" -> scraper.get_all_employees_by_company_via_linkedin \
    -> string.lower(''.join(_.split())) \
        -> itertools.product(_, repeat=4) \
            -> itertools2.cycle_uppercase \
                -> [_ + postfix for postfix in ['123','!','$']] \
                    -> print

The result is that the program now appends the strings '123', '!', and '$' to every generated password, which increases the chances of guessing the user's password correctly, or not, depending on the password :)

To summarize, it's possible to take OSINT/WEBINT data on a given person or company and use it to generate potential passwords, and it's easy to do with Pythonect. There are, of course, many different ways to manipulate the data into passwords and many programs and filters that can be used. In this aspect, Pythonect being a flow-oriented language makes it easy to experiment and research with different modules and programs in a "plug and play" manner.

Source: http://blog.ikotler.org/2012/12/scraping-linkedin-public-profiles-for.html