The New Beginnings of AI-Powered Web Data Gathering Solutions
Data gathering consists of many time-consuming and complex activities: proxy management, data parsing, infrastructure management, overcoming fingerprinting countermeasures, rendering JavaScript-heavy websites at scale, and much more. Is there a way to automate these processes? Absolutely.
Finding a more manageable solution for large-scale data gathering has been on the minds of many in the web scraping community. Specialists have long seen great potential in applying AI (Artificial Intelligence) and ML (Machine Learning) to web scraping. However, only recently have concrete steps been taken toward automating data gathering with AI. This is no surprise, as AI and ML algorithms have become robust at scale only in recent years, alongside advances in computing.
By applying AI-powered solutions to data gathering, we can automate tedious manual work and ensure much better quality of the collected data. To better grasp the struggles of web scraping, let's look at the data gathering process, its biggest challenges, and possible future solutions that might ease, and potentially solve, those challenges.
Data collection: step by step
To better understand the web scraping process, it's best to visualize it in a value chain:
As you can see, web scraping breaks down into four distinct actions:
- Crawling path building and URL collection.
- Scraper development and its support.
- Proxy acquisition and management.
- Data fetching and parsing.
Anything that goes beyond these steps is considered data engineering or part of data analysis.
Pinpointing which actions belong to the web scraping category makes it easier to identify the most common data gathering challenges. It also lets us see which parts can be automated and improved with AI- and ML-powered solutions.
Large-scale scraping challenges
Traditional data gathering from the web requires a lot of governance and quality assurance. Naturally, the difficulties grow with the scale of the scraping project. Let's dig a little deeper into these challenges by walking through the value chain's actions and analyzing the potential issues at each step.
Building a crawling path and collecting URLs
Building a crawling path is the first and essential part of data gathering. Put simply, a crawling path is a library of URLs from which data will be extracted. The biggest challenge here is not collecting the website URLs you want to scrape, but obtaining all the necessary URLs of the initial targets. That could mean dozens, if not hundreds, of URLs that need to be scraped, parsed, and identified as relevant to your case.
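As a small illustration of the URL-collection step, the sketch below gathers same-domain links from a page's HTML, the seed of a crawling path. It uses only Python's standard library, and the domain names and HTML snippet are hypothetical; a real crawler would fetch the HTML over HTTP:

```python
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

class LinkCollector(HTMLParser):
    """Collects absolute same-domain links from an HTML page."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.urls = set()

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                absolute = urljoin(self.base_url, href)
                # Keep only URLs on the target's own domain.
                if urlparse(absolute).netloc == urlparse(self.base_url).netloc:
                    self.urls.add(absolute)

# Static snippet keeps the sketch self-contained; in practice this
# would be the body of an HTTP response.
sample_html = """
<a href="/products/1">Item 1</a>
<a href="/products/2">Item 2</a>
<a href="https://other-site.example/ad">Ad</a>
"""
collector = LinkCollector("https://shop.example/")
collector.feed(sample_html)
print(sorted(collector.urls))
# → ['https://shop.example/products/1', 'https://shop.example/products/2']
```

Note how the off-domain link is filtered out; deciding which discovered URLs are actually relevant is exactly the manual effort described above.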
Scraper development and its maintenance
Building a scraper comes with a whole new set of issues. There are a lot of factors to look out for when doing so:
- Choosing the language, APIs, frameworks, etc.
- Testing out what you've built.
- Infrastructure management and maintenance.
- Overcoming fingerprinting countermeasures.
- Rendering JavaScript-heavy websites at scale.
And these are just the tip of the iceberg you will encounter when building a web scraper. Plenty of smaller, time-consuming tasks will accumulate into larger issues.
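To give one concrete taste of the fingerprinting point above: a common first counter-measure is rotating the User-Agent header per request. This is a minimal sketch with a hypothetical target URL and an abbreviated agent pool; real anti-fingerprinting work spans many more attributes (headers, TLS settings, browser behavior):

```python
import random
import urllib.request

# Abbreviated pool of browser User-Agent strings (illustrative only).
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) Firefox/115.0",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) Safari/605.1.15",
    "Mozilla/5.0 (X11; Linux x86_64) Chrome/120.0 Safari/537.36",
]

def build_request(url):
    """Attach a randomly chosen User-Agent so consecutive requests
    do not all present an identical browser signature."""
    return urllib.request.Request(
        url, headers={"User-Agent": random.choice(USER_AGENTS)}
    )

req = build_request("https://target.example/page")  # hypothetical URL
print(req.get_header("User-agent"))
```

Sending the request (`urllib.request.urlopen(req)`) is left out so the sketch stays runnable offline.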
Proxy acquisition and management
Proxy management will be a challenge, especially for those new to scraping. Many small mistakes can get whole batches of proxies blocked before a site is successfully scraped. Proxy rotation is good practice, but it doesn't eliminate all the issues and requires constant management and upkeep of the infrastructure. So if you are relying on a proxy vendor, good and frequent communication will be necessary.
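The simplest form of the rotation mentioned above is round-robin cycling through a pool, so no single IP carries all the traffic. A minimal sketch, with hypothetical proxy addresses (a real pool comes from your provider and needs health checks, ban detection, and replacement logic on top):

```python
from itertools import cycle

# Hypothetical proxy endpoints -- in practice supplied by a vendor.
PROXY_POOL = [
    "http://10.0.0.1:8000",
    "http://10.0.0.2:8000",
    "http://10.0.0.3:8000",
]

proxy_rotation = cycle(PROXY_POOL)

def next_proxy():
    """Round-robin rotation: spreads requests evenly across the pool,
    lowering the chance any single IP gets blocked."""
    return next(proxy_rotation)

print([next_proxy() for _ in range(4)])  # the 4th call wraps to the first proxy
```

This naive version keeps cycling through blocked proxies too, which is precisely the upkeep burden the text describes.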
Data fetching and parsing
Data parsing is the process of making the acquired data understandable and usable. While creating a parser might sound easy, its ongoing maintenance will cause big problems. Adapting to different page formats and website changes will be a constant struggle, demanding your development teams' attention more often than you might expect.
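The maintenance burden becomes obvious in even a toy parser. The sketch below extracts a price from two hypothetical page layouts; every site redesign means adding (or fixing) another pattern by hand, which is the treadmill the paragraph above describes:

```python
import re

# Two hypothetical layouts for the same data point.
page_v1 = '<span class="price">$19.99</span>'
page_v2 = '<div data-price-usd="19.99">19,99 $</div>'

def parse_price(html):
    """Try each known markup pattern in turn; a production parser would
    log misses so maintainers notice when a redesign breaks extraction."""
    patterns = [
        r'class="price">\$([\d.]+)<',   # layout v1
        r'data-price-usd="([\d.]+)"',   # layout v2
    ]
    for pattern in patterns:
        match = re.search(pattern, html)
        if match:
            return float(match.group(1))
    return None  # unknown layout: flag for human review

print(parse_price(page_v1), parse_price(page_v2))  # → 19.99 19.99
```

The hand-written pattern list is exactly what ML-based parsing (discussed below in the source) aims to replace with learned, layout-agnostic extraction.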
As you can see, traditional web scraping comes with many challenges and requires a lot of manual labour, time, and resources. The bright side of computing, however, is that almost everything can be automated. And as AI- and ML-powered web scraping emerges, future-proof large-scale data gathering becomes a more realistic prospect.
Making web scraping future-proof
In what ways can AI and ML innovate and improve web scraping? According to Jonas Kubilius, an AI researcher, Marie Sklodowska-Curie Alumnus, Co-Founder of Three Thirds, and member of the Oxylabs Next-Gen Residential Proxy AI & ML advisory board:

“There are recurring patterns in web content that are typically scraped, such as how prices are encoded and displayed, so in principle, ML should be able to learn to spot these patterns and extract the relevant information. The research challenge here is to learn models that generalize well across various websites or that can learn from a few human-provided examples. The engineering challenge is to scale up these solutions to realistic web scraping loads and pipelines.”
Instead of manually developing and managing scraper code for each new website and URL, an AI- and ML-powered solution simplifies the data gathering pipeline, taking care of proxy pool management, data parsing maintenance, and other tedious work.
AI- and ML-powered solutions not only enable developers to build highly scalable data extraction tools, but also let data science teams prototype rapidly. They can also stand in as a backup for your existing custom-built code if it ever breaks.
What the future holds for web scraping
As we've already established, fast data processing pipelines combined with cutting-edge ML techniques can offer an unparalleled competitive advantage in the web scraping community. And looking at today's market, the implementation of AI and ML in data gathering has already begun.
For this reason, Oxylabs is introducing Next-Gen Residential Proxies, which are powered by the latest AI applications.
Next-Gen Residential Proxies were built with heavy-duty data retrieval operations in mind. They enable web data extraction without delays or errors. The product is as customizable as a regular proxy, but at the same time it guarantees a much higher success rate and requires less maintenance. Custom headers and IP stickiness are both supported, alongside reusable cookies and POST requests. Its main benefits are:
- 100% success rate
- AI-Powered Dynamic Fingerprinting (CAPTCHA, block, and website change handling)
- Machine Learning-based HTML parsing
- Easy integration (like any other proxy)
- Auto-Retry system
- JavaScript rendering
- Patented proxy rotation system
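"Easy integration (like any other proxy)" generally means pointing your HTTP client at the provider's single gateway endpoint and letting the provider handle rotation and retries server-side. A sketch of that wiring with Python's standard library; the endpoint address and credentials here are hypothetical, so consult the provider's documentation for the real address and authentication scheme:

```python
import urllib.request

# Hypothetical gateway address and credentials -- not a real endpoint.
PROXY_ENDPOINT = "http://username:password@proxy.example:60000"

opener = urllib.request.build_opener(
    urllib.request.ProxyHandler({"http": PROXY_ENDPOINT, "https": PROXY_ENDPOINT})
)
# opener.open("https://target.example/") would now route through the gateway,
# where IP rotation, retries, etc. happen on the provider's side.
proxy_handlers = [
    h for h in opener.handlers if isinstance(h, urllib.request.ProxyHandler)
]
print(proxy_handlers[0].proxies["https"])
```

The point of the design is that the scraper's code stays a one-line proxy configuration while the intelligence lives behind the gateway.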
Going back to our previous web scraping value chain, you can see which parts of web scraping can be automated and improved with AI- and ML-powered Next-Gen Residential Proxies.
The Next-Gen Residential Proxy solution automates almost the whole scraping process, making it a truly strong competitor for future-proof web scraping.
This project will be continuously developed and improved by the Oxylabs in-house ML engineering team and a board of advisors (Jonas Kubilius, Adi Andrei, Pujaa Rajan, and Ali Chaudhry) specializing in the fields of Artificial Intelligence and ML engineering.
Wrapping up
As the scale of web scraping projects increases, automating data gathering becomes a high priority for businesses that want to stay ahead of the competition. The improvement of AI algorithms in recent years, along with growing compute power and a larger talent pool, has made AI implementations possible in a number of industries, web scraping included.
Establishing AI- and ML-powered data gathering techniques offers a great competitive advantage in the industry and saves copious amounts of time and resources. It is the new future of large-scale web scraping, and a good head start on developing future-proof solutions.
Translated from: https://towardsdatascience.com/the-new-beginnings-of-ai-powered-web-data-gathering-solutions-a8e95f5e1d3f