31. Website Data Monitoring - 1

Website data monitoring:

Approach: watch the fields of the page we care about, compute an MD5 hash over them, and store the hash in the database. On each later run the fresh hash is compared against the stored one to tell whether the site has updated, and an email notification goes out when it has. Four fields are stored per record (site name, page URL, content hash, timestamp).
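The post stores the hash but never shows the comparison or the email step. Below is a minimal sketch of that missing piece, assuming the web_page_update table used by the pipeline further down; the SMTP host and the two addresses are placeholders, not values from the project.

# -*- coding: utf-8 -*-
# Sketch only: compare a freshly computed MD5 against the last stored one
# and mail a notice when they differ. Table/column names match the pipeline
# below; SMTP host and addresses are placeholders.
import smtplib
from email.mime.text import MIMEText

import pymysql


def site_changed(client, website_url, new_md5):
    """True if the most recent stored MD5 for this URL differs from new_md5."""
    with client.cursor() as cursor:
        cursor.execute(
            "SELECT content_md5 FROM web_page_update "
            "WHERE website_url=%s ORDER BY date_time DESC LIMIT 1",
            (website_url,))
        row = cursor.fetchone()
    return row is not None and row[0] != new_md5


def notify(website_name, website_url):
    msg = MIMEText('%s was updated: %s' % (website_name, website_url))
    msg['Subject'] = 'Site update: %s' % website_name
    msg['From'] = 'monitor@example.com'   # placeholder sender
    msg['To'] = 'ops@example.com'         # placeholder recipient
    with smtplib.SMTP('smtp.example.com') as server:  # placeholder SMTP host
        server.send_message(msg)


if __name__ == '__main__':
    client = pymysql.connect(host='172.16.0.55', port=3306, user='root',
                             password='concom603', db='web_page', charset='utf8')
    url = 'http://wzszjw.wenzhou.gov.cn/col/col1357901/index.html'
    if site_changed(client, url, '0' * 32):  # dummy hash for illustration
        notify('温州市建设工程造价管理处', url)
    client.close()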

Helper module pulled in by the spider:

md5_tools.py
# -*- coding:utf-8 -*-
import hashlib


# MD5 hash helper
def md5_encode(text):
    hash_obj = hashlib.md5()
    hash_obj.update(bytes(text, encoding='utf-8'))  # the string to be hashed goes here
    return hash_obj.hexdigest()  # hex digest of the hash
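A quick sanity check of the helper (the digest shown is the well-known MD5 of "hello"):

>>> md5_encode('hello')
'5d41402abc4b2a76b9719d911017c592'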
wenzhou.py

# -*- coding: utf-8 -*-
import re
import time

import scrapy

from WEB.conmon.md5_tools import md5_encode
from WEB.items import WebItem


class CompanyInfoSpider(scrapy.Spider):
    name = 'wenzhou'
    allowed_domains = ['wzszjw.wenzhou.gov.cn']
    start_urls = ['http://wzszjw.wenzhou.gov.cn/col/col1357901/index.html']
    custom_settings = {
        "DOWNLOAD_DELAY": 0.5,
        "ITEM_PIPELINES": {'WEB.pipelines.MysqlPipeline': 320},
        "DOWNLOADER_MIDDLEWARES": {
            'WEB.middlewares.RandomUaseragentMiddleware': 500,
        },
    }

    def parse(self, response):
        # response.text is already a decoded str; the original encode/decode
        # round-trip was a no-op and is dropped here
        _response = response.text
        # grab the anchor entries of the news list; any change in them changes
        # the concatenated string and therefore its MD5
        texts = re.findall("<span>.*?</span><b>&middot;</b><a href='.*?'", _response)
        joined = "".join(texts)
        text_md5 = md5_encode(joined)

        item = WebItem()
        item["website_name"] = "温州市建设工程造价管理处"  # Wenzhou construction cost administration office
        item["website_url"] = response.url
        item["content_md5"] = text_md5
        item["date_time"] = time.time()
        print(item)
        yield item
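To make the parse step concrete, here is the same regex run against a made-up fragment of the list markup (the real page may differ). Only the matched span/anchor entries feed the hash, so any new article changes content_md5. With the standard Scrapy layout the spider is started with scrapy crawl wenzhou.

# -*- coding: utf-8 -*-
# Illustration of the extraction step on an invented HTML fragment.
import hashlib
import re

html = ("<span>2018-09-20</span><b>&middot;</b><a href='/art/1.html'>title1</a>"
        "<span>2018-09-18</span><b>&middot;</b><a href='/art/2.html'>title2</a>")

texts = re.findall("<span>.*?</span><b>&middot;</b><a href='.*?'", html)
joined = "".join(texts)
print(texts)                                            # the two matched entries
print(hashlib.md5(joined.encode('utf-8')).hexdigest())  # value stored as content_md5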
items.py

# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/items.html

import scrapy


class WebItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    content_md5 = scrapy.Field()   # MD5 of the monitored text
    website_url = scrapy.Field()   # URL of the crawled page
    website_name = scrapy.Field()  # site name
    date_time = scrapy.Field()     # current timestamp
pipelines.py

# -*- coding: utf-8 -*-
import pymysql


class WebPipeline(object):
    def process_item(self, item, spider):
        return item


# persist items to MySQL
class MysqlPipeline(object):
    def open_spider(self, spider):
        # scrapy.conf is deprecated; read the settings off the spider instead
        settings = spider.settings
        self.host = settings.get('MYSQL_HOST')
        self.port = settings.get('MYSQL_PORT')
        self.user = settings.get('MYSQL_USER')
        self.password = settings.get('MYSQL_PASSWORD')
        self.db = settings.get('MYSQL_DB')
        self.table = settings.get('TABLE')
        self.client = pymysql.connect(host=self.host, user=self.user,
                                      password=self.password, port=self.port,
                                      db=self.db, charset='utf8')

    def process_item(self, item, spider):
        item_dict = dict(item)
        cursor = self.client.cursor()
        values = ','.join(['%s'] * len(item_dict))
        keys = ','.join(item_dict.keys())
        sql = 'INSERT INTO {table}({keys}) VALUES ({values})'.format(
            table=self.table, keys=keys, values=values)
        try:
            # first argument is the SQL statement, second the values as a tuple
            if cursor.execute(sql, tuple(item_dict.values())):
                print('Row inserted: the site has new content.')
                self.client.commit()
        except Exception as e:
            print(e)
            print('Hash already stored: the site has not updated.')
            self.client.rollback()
        return item

    def close_spider(self, spider):
        self.client.close()
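The pipeline treats a failed INSERT as "the site has not updated", which only works if the table rejects duplicate hashes. The post never shows the DDL, so the schema below is an assumption, in particular the UNIQUE key on content_md5; the connection values come from settings.py.

# -*- coding: utf-8 -*-
# Assumed DDL for web_page_update (not shown in the original post). The
# UNIQUE key on content_md5 is what makes a repeated INSERT fail when the
# page has not changed.
import pymysql

DDL = """
CREATE TABLE IF NOT EXISTS web_page_update (
    id INT AUTO_INCREMENT PRIMARY KEY,
    website_name VARCHAR(128) NOT NULL,
    website_url  VARCHAR(512) NOT NULL,
    content_md5  CHAR(32)     NOT NULL,
    date_time    DOUBLE       NOT NULL,
    UNIQUE KEY uq_content_md5 (content_md5)
) DEFAULT CHARSET=utf8
"""

client = pymysql.connect(host='172.16.0.55', port=3306, user='root',
                         password='concom603', db='web_page', charset='utf8')
with client.cursor() as cursor:
    cursor.execute(DDL)
client.commit()
client.close()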
settings.py

# -*- coding: utf-8 -*-

# Scrapy settings for WEB project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://doc.scrapy.org/en/latest/topics/settings.html
#     https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://doc.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'WEB'

SPIDER_MODULES = ['WEB.spiders']
NEWSPIDER_MODULE = 'WEB.spiders'

# MySQL connection parameters
MYSQL_HOST = "172.16.0.55"
MYSQL_PORT = 3306
MYSQL_USER = "root"
MYSQL_PASSWORD = "concom603"
MYSQL_DB = 'web_page'
TABLE = "web_page_update"

# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'WEB (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = False

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://doc.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See https://doc.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'WEB.middlewares.WebSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'WEB.middlewares.WebDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See https://doc.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://doc.scrapy.org/en/latest/topics/item-pipeline.html
#ITEM_PIPELINES = {
#    'WEB.pipelines.WebPipeline': 300,
#}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
middlewares.py

# -*- coding: utf-8 -*-

# Define here the models for your spider middleware
#
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/spider-middleware.html

import random

from scrapy import signals


class WebSpiderMiddleware(object):
    # Not all methods need to be defined. If a method is not defined,
    # scrapy acts as if the spider middleware does not modify the
    # passed objects.

    @classmethod
    def from_crawler(cls, crawler):
        # This method is used by Scrapy to create your spiders.
        s = cls()
        crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
        return s

    def process_spider_input(self, response, spider):
        # Called for each response that goes through the spider
        # middleware and into the spider.
        # Should return None or raise an exception.
        return None

    def process_spider_output(self, response, result, spider):
        # Called with the results returned from the Spider, after
        # it has processed the response.
        # Must return an iterable of Request, dict or Item objects.
        for i in result:
            yield i

    def process_spider_exception(self, response, exception, spider):
        # Called when a spider or process_spider_input() method
        # (from other spider middleware) raises an exception.
        # Should return either None or an iterable of Response, dict
        # or Item objects.
        pass

    def process_start_requests(self, start_requests, spider):
        # Called with the start requests of the spider, and works
        # similarly to the process_spider_output() method, except
        # that it doesn't have a response associated.
        # Must return only requests (not items).
        for r in start_requests:
            yield r

    def spider_opened(self, spider):
        spider.logger.info('Spider opened: %s' % spider.name)


class WebDownloaderMiddleware(object):
    # Not all methods need to be defined. If a method is not defined,
    # scrapy acts as if the downloader middleware does not modify the
    # passed objects.

    @classmethod
    def from_crawler(cls, crawler):
        # This method is used by Scrapy to create your spiders.
        s = cls()
        crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
        return s

    def process_request(self, request, spider):
        # Called for each request that goes through the downloader
        # middleware.
        # Must either:
        # - return None: continue processing this request
        # - or return a Response object
        # - or return a Request object
        # - or raise IgnoreRequest: process_exception() methods of
        #   installed downloader middleware will be called
        return None

    def process_response(self, request, response, spider):
        # Called with the response returned from the downloader.
        # Must either:
        # - return a Response object
        # - return a Request object
        # - or raise IgnoreRequest
        return response

    def process_exception(self, request, exception, spider):
        # Called when a download handler or a process_request()
        # (from other downloader middleware) raises an exception.
        # Must either:
        # - return None: continue processing this exception
        # - return a Response object: stops process_exception() chain
        # - return a Request object: stops process_exception() chain
        pass

    def spider_opened(self, spider):
        spider.logger.info('Spider opened: %s' % spider.name)


# rotate the USER_AGENT header randomly per request
class RandomUaseragentMiddleware(object):
    def __init__(self):
        self.user_agent = [
            "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/535.1 (KHTML, like Gecko) Chrome/14.0.835.163 Safari/535.1",
            "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:6.0) Gecko/20100101 Firefox/6.0",
            "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/534.50 (KHTML, like Gecko) Version/5.1 Safari/534.50",
            "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/534.27 (KHTML, like Gecko) Chrome/12.0.712.0 Safari/534.27",
            "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) AppleWebKit/534.7 (KHTML, like Gecko) Chrome/7.0.514.0 Safari/534.7",
            "Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US; rv:1.9.1.6) Gecko/20091201 Firefox/3.5.6 GTB5",
            "Mozilla/4.0 (compatible; MSIE 5.5; Windows NT 5.0 )",
            "HTC_Dream Mozilla/5.0 (Linux; U; Android 1.5; en-ca; Build/CUPCAKE) AppleWebKit/528.5 (KHTML, like Gecko) Version/3.1.2 Mobile Safari/525.20.1",
            "Mozilla/5.0 (Linux; U; Android 2.3.6; en-us; Nexus S Build/GRK39F) AppleWebKit/533.1 (KHTML, like Gecko) Version/4.0 Mobile Safari/533.1",
            "Mozilla/5.0 (Linux; U; Android 2.1; en-us; HTC Legend Build/cupcake) AppleWebKit/530.17 (KHTML, like Gecko) Version/4.0 Mobile Safari/530.17",
            "Mozilla/5.0 (Linux; U; Android 2.0; en-us; Milestone Build/ SHOLS_U2_01.03.1) AppleWebKit/530.17 (KHTML, like Gecko) Version/4.0 Mobile Safari/530.17",
            "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 2.0.50727; TheWorld)",
            "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 2.0.50727; .NET CLR 3.0.04506.648; .NET CLR 3.5.21022; .NET4.0E; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; .NET4.0C)",
            "Opera/9.80 (Windows NT 5.1; U; zh-cn) Presto/2.9.168 Version/11.50",
        ]

    def process_request(self, request, spider):
        request.headers["User-Agent"] = random.choice(self.user_agent)
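The user-agent rotation can be spot-checked outside a full crawl by driving the middleware by hand; a throwaway snippet, not part of the project:

# Spot-check the rotation: instantiate the middleware and feed it a bare
# Request a few times, printing the header it assigns.
from scrapy import Request
from WEB.middlewares import RandomUaseragentMiddleware

mw = RandomUaseragentMiddleware()
for _ in range(3):
    req = Request('http://wzszjw.wenzhou.gov.cn')
    mw.process_request(req, spider=None)  # spider is unused by this method
    print(req.headers['User-Agent'])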

 

