When we also need to monitor disk reads/writes and network traffic, we can again turn to the psutil library. Below is the revised example program, extended to monitor disk I/O and network transfer:
```python
import psutil
import json
import time

def get_process_usage():
    """Collect CPU and memory usage for every running process."""
    process_list = []
    for proc in psutil.process_iter(['pid', 'name', 'username', 'cpu_percent', 'memory_percent']):
        process_info = proc.info
        process_list.append({
            'pid': process_info['pid'],
            'name': process_info['name'],
            'username': process_info['username'],
            'cpu_percent': process_info['cpu_percent'],
            'memory_percent': process_info['memory_percent']
        })
    return process_list

def get_system_usage():
    """Collect system-wide CPU, memory, disk and network usage."""
    cpu_percent = psutil.cpu_percent()
    memory_percent = psutil.virtual_memory().percent
    disk_usage = psutil.disk_usage('/').percent
    net_io = psutil.net_io_counters()
    network_usage = {
        'bytes_sent': net_io.bytes_sent,
        'bytes_received': net_io.bytes_recv
    }
    return {
        'cpu_percent': cpu_percent,
        'memory_percent': memory_percent,
        'disk_percent': disk_usage,
        'network_usage': network_usage
    }

def main():
    while True:
        system_usage = get_system_usage()
        process_usage = get_process_usage()
        data = {'system': system_usage, 'processes': process_usage}
        json_data = json.dumps(data, indent=4)
        # Print the JSON data
        print(json_data)
        # Save the JSON data to a file
        with open('system_monitor.json', 'w') as file:
            file.write(json_data)
        time.sleep(1)

if __name__ == "__main__":
    main()
```
When the JSON needs to be compressed before being written to disk, Python's gzip library can be used; it handles both compression and decompression. Below is the revised example, which compresses the JSON data before saving it to a file:
```python
import psutil
import json
import gzip
import time

def get_process_usage():
    """Collect CPU and memory usage for every running process."""
    process_list = []
    for proc in psutil.process_iter(['pid', 'name', 'username', 'cpu_percent', 'memory_percent']):
        process_info = proc.info
        process_list.append({
            'pid': process_info['pid'],
            'name': process_info['name'],
            'username': process_info['username'],
            'cpu_percent': process_info['cpu_percent'],
            'memory_percent': process_info['memory_percent']
        })
    return process_list

def get_system_usage():
    """Collect system-wide CPU, memory, disk and network usage."""
    cpu_percent = psutil.cpu_percent()
    memory_percent = psutil.virtual_memory().percent
    disk_usage = psutil.disk_usage('/').percent
    net_io = psutil.net_io_counters()
    network_usage = {
        'bytes_sent': net_io.bytes_sent,
        'bytes_received': net_io.bytes_recv
    }
    return {
        'cpu_percent': cpu_percent,
        'memory_percent': memory_percent,
        'disk_percent': disk_usage,
        'network_usage': network_usage
    }

def main():
    while True:
        system_usage = get_system_usage()
        process_usage = get_process_usage()
        data = {'system': system_usage, 'processes': process_usage}
        json_data = json.dumps(data, indent=4)
        # Compress the JSON data. gzip.compress() already produces a
        # complete gzip stream, so write it with a plain binary file;
        # wrapping it in gzip.open() again would compress it twice.
        compressed_data = gzip.compress(json_data.encode('utf-8'))
        with open('system_monitor.json.gz', 'wb') as file:
            file.write(compressed_data)
        time.sleep(1)

if __name__ == "__main__":
    main()
```
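For completeness, here is a minimal sketch of how a consumer could read the compressed file back. The sample `data` dict is made up for illustration, not real monitor output:

```python
import gzip
import json

# Hypothetical sample in the same shape the monitor writes
data = {"system": {"cpu_percent": 12.5}, "processes": []}

# Write it compressed, exactly as the monitor does
with open("system_monitor.json.gz", "wb") as f:
    f.write(gzip.compress(json.dumps(data).encode("utf-8")))

# Read it back: gzip.open() transparently decompresses
with gzip.open("system_monitor.json.gz", "rt", encoding="utf-8") as f:
    restored = json.load(f)

print(restored == data)  # -> True
```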
If you want to monitor each process's disk and network usage, the previous example can be extended further. psutil's per-process io_counters() method returns a process's cumulative disk I/O statistics. Note, however, that psutil does not expose per-process network counters: psutil.net_io_counters(pernic=True) reports traffic per network interface, not per process, so the closest available figures are system-wide or per-NIC totals. Here is the revised example:
```python
import psutil
import json
import gzip
import time

def get_process_usage():
    """Collect CPU, memory and disk I/O statistics for every process."""
    process_list = []
    for proc in psutil.process_iter(['pid', 'name', 'username', 'cpu_percent', 'memory_percent']):
        process_info = proc.info
        try:
            # Per-process disk I/O counters (may be unavailable for
            # processes owned by other users)
            io_counters = proc.io_counters()
            disk_io = {
                'read_bytes': io_counters.read_bytes,
                'write_bytes': io_counters.write_bytes
            }
        except (psutil.AccessDenied, psutil.NoSuchProcess):
            disk_io = None
        process_list.append({
            'pid': process_info['pid'],
            'name': process_info['name'],
            'username': process_info['username'],
            'cpu_percent': process_info['cpu_percent'],
            'memory_percent': process_info['memory_percent'],
            'disk_io': disk_io
        })
    return process_list

def get_system_usage():
    """Collect system-wide usage, including per-interface network counters."""
    cpu_percent = psutil.cpu_percent()
    memory_percent = psutil.virtual_memory().percent
    disk_usage = psutil.disk_usage('/').percent
    net_io = psutil.net_io_counters()
    # psutil only exposes network counters system-wide or per NIC,
    # not per process
    per_nic = {
        nic: {'bytes_sent': counters.bytes_sent,
              'bytes_received': counters.bytes_recv}
        for nic, counters in psutil.net_io_counters(pernic=True).items()
    }
    return {
        'cpu_percent': cpu_percent,
        'memory_percent': memory_percent,
        'disk_percent': disk_usage,
        'network_usage': {
            'bytes_sent': net_io.bytes_sent,
            'bytes_received': net_io.bytes_recv,
            'per_nic': per_nic
        }
    }

def main():
    while True:
        system_usage = get_system_usage()
        process_usage = get_process_usage()
        data = {'system': system_usage, 'processes': process_usage}
        json_data = json.dumps(data, indent=4)
        # Compress and save (gzip.compress already yields a complete
        # gzip stream, so a plain binary write is enough)
        compressed_data = gzip.compress(json_data.encode('utf-8'))
        with open('system_monitor.json.gz', 'wb') as file:
            file.write(compressed_data)
        time.sleep(1)

if __name__ == "__main__":
    main()
```
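The raw io_counters values are cumulative since process start; to get throughput, you diff two snapshots taken a known interval apart. A small stdlib-only sketch (the sample snapshots below are made up for illustration):

```python
def io_rates(prev, curr, interval_s):
    """Turn two cumulative per-PID counter snapshots into bytes/second."""
    rates = {}
    for pid, now in curr.items():
        before = prev.get(pid)
        if before is None:
            continue  # process appeared between the two samples
        rates[pid] = {
            "read_bps": (now["read_bytes"] - before["read_bytes"]) / interval_s,
            "write_bps": (now["write_bytes"] - before["write_bytes"]) / interval_s,
        }
    return rates

# Hypothetical snapshots taken 2 seconds apart
prev = {1234: {"read_bytes": 1000, "write_bytes": 500}}
curr = {1234: {"read_bytes": 3000, "write_bytes": 500},
        5678: {"read_bytes": 10, "write_bytes": 10}}
rates = io_rates(prev, curr, 2.0)
print(rates)  # -> {1234: {'read_bps': 1000.0, 'write_bps': 0.0}}
```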
To combine features 1, 2 and 3, we merge the data-collection part of each feature and combine the results into one comprehensive JSON object. The integrated example below simultaneously monitors system resources, hardware sensor readings, and the CPU and memory usage of every process, and writes everything to a single combined JSON file:
```python
import psutil
import sensors  # PySensors (lm-sensors bindings, Linux only)
import json
import gzip
import time

def get_sensor_data():
    """Read hardware sensor values via lm-sensors."""
    sensors.init()
    sensor_data = {}
    for chip in sensors.iter_detected_chips():
        for feature in chip:
            if feature.label:
                sensor_data[feature.label] = {
                    'value': feature.get_value(),
                    'unit': feature.unit.decode()
                }
    sensors.cleanup()
    return sensor_data

def get_all_processes_usage():
    """Collect CPU and memory usage for every running process."""
    process_list = []
    for proc in psutil.process_iter(['pid', 'name', 'cpu_percent', 'memory_percent']):
        process_info = proc.info
        process_list.append({
            'pid': process_info['pid'],
            'name': process_info['name'],
            'cpu_percent': process_info['cpu_percent'],
            'memory_percent': process_info['memory_percent']
        })
    return process_list

def get_system_usage():
    """Collect system-wide CPU, memory, disk I/O and network counters."""
    cpu_percent = psutil.cpu_percent(interval=1)
    memory_percent = psutil.virtual_memory().percent
    disk_io = psutil.disk_io_counters()
    network_io = psutil.net_io_counters()
    return {
        'cpu_percent': cpu_percent,
        'memory_percent': memory_percent,
        'disk_io': {
            'read_bytes': disk_io.read_bytes,
            'write_bytes': disk_io.write_bytes
        },
        'network_io': {
            'bytes_sent': network_io.bytes_sent,
            'bytes_received': network_io.bytes_recv
        }
    }

def main():
    while True:
        system_usage = get_system_usage()
        sensor_data = get_sensor_data()
        all_processes_usage = get_all_processes_usage()
        data = {
            'system': system_usage,
            'sensor_data': sensor_data,
            'processes': all_processes_usage
        }
        json_data = json.dumps(data, indent=4)
        # Print the JSON data
        print(json_data)
        # Compress and save (gzip.compress already yields a complete
        # gzip stream, so a plain binary write is enough)
        compressed_data = gzip.compress(json_data.encode('utf-8'))
        with open('system_monitor.json.gz', 'wb') as file:
            file.write(compressed_data)
        time.sleep(1)

if __name__ == "__main__":
    main()
```
In the program above, get_sensor_data(), get_all_processes_usage() and get_system_usage() collect the hardware sensor readings, the per-process CPU and memory usage, and the overall system resource usage, respectively. The results are then merged into one comprehensive JSON object, printed to the console, and saved to the compressed file system_monitor.json.gz.
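The program overwrites system_monitor.json.gz on every iteration, keeping only the latest snapshot. If a history is wanted instead, snapshots can be appended as compressed JSON lines; a stdlib-only sketch (the file name and snapshot contents are illustrative):

```python
import gzip
import json
import os
import tempfile
import time

def append_snapshot(path, snapshot):
    """Append one snapshot as a JSON line; mode 'at' adds a new gzip member."""
    record = {"timestamp": time.time(), **snapshot}
    with gzip.open(path, "at", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

def read_snapshots(path):
    """gzip transparently reads concatenated members back as one stream."""
    with gzip.open(path, "rt", encoding="utf-8") as f:
        return [json.loads(line) for line in f]

path = os.path.join(tempfile.mkdtemp(), "monitor.jsonl.gz")
append_snapshot(path, {"cpu_percent": 10.0})
append_snapshot(path, {"cpu_percent": 20.0})
history = [s["cpu_percent"] for s in read_snapshots(path)]
print(history)  # -> [10.0, 20.0]
```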
Note that because sensor readings and process resource usage change in real time, the output will differ from one sample to the next. The program loops forever; stop it manually when you are done.
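Stopping the program with Ctrl+C raises KeyboardInterrupt inside time.sleep(), which prints a traceback. One way to exit cleanly is to factor the loop into a helper that catches the interrupt and optionally bounds the number of iterations; a sketch (run_monitor and its parameters are hypothetical names, not part of psutil):

```python
import time

def run_monitor(collect, handle, interval_s=1.0, max_iterations=None):
    """Run the sampling loop; stops cleanly on Ctrl+C or after max_iterations."""
    count = 0
    try:
        while max_iterations is None or count < max_iterations:
            handle(collect())  # e.g. collect=get_system_usage, handle=print
            count += 1
            time.sleep(interval_s)
    except KeyboardInterrupt:
        pass  # Ctrl+C ends the loop instead of printing a traceback
    return count

# Quick demonstration with stub callables instead of real psutil calls
samples = []
n = run_monitor(lambda: {"cpu": 0.0}, samples.append,
                interval_s=0.01, max_iterations=3)
print(n, len(samples))  # -> 3 3
```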