Parsing Electronic Medical Records with a BERT Model

Original project address

Project address

This project is adapted from that GitHub project; many thanks to its author.

Problem Description

We want to predict, from the clinical notes written during a patient's hospital stay, whether that patient will be readmitted within the next 30 days. Such a prediction can help physicians choose better treatment plans and assess surgical risk. In clinical practice it is common for a treatment to be routine while the prognosis remains hard to manage. Joint replacement, for example, has been highly successful as the definitive treatment for osteoarthritis and similar diseases in the elderly, yet surgery-related complications and the readmissions they cause are far from rare. Patient-specific factors such as heart disease, diabetes, and obesity further raise the risk of readmission after joint replacement. As the population receiving joint replacement grows older and less healthy, complications, and with them readmissions, will only increase.

From the electronic medical records we observed that, for certain diseases and procedures, patients readmitted within 30 days show markedly elevated risk across the board. We therefore flagged stays whose admission reason matched the previous stay and whose gap between the previous discharge and the next admission was at most 30 days, treating them as the same episode, and trained a model on the resulting labels.

Data Selection and Cleaning

The data comes from the Medical Information Mart for Intensive Care III (MIMIC-III), a multi-parameter intensive care database developed and maintained under NIH funding by MIT, the Beth Israel Deaconess Medical Center of Harvard Medical School, and Philips Healthcare. The dataset is free for researchers but requires an application. For our experiments we loaded the data into PostgreSQL. First we took all rows from the admissions table and, for each record, computed the interval until the same subject_id next appears; if it was under 30 days we labeled the record Label=1, otherwise Label=0. We then computed the length of stay (discharge date minus admission date) and kept only samples with a stay longer than 2 days. The HADM_IDs of these samples were randomly split 0.8 : 0.1 : 0.1 into training, validation, and test sets. Finally, we pulled each set's text from the noteevents table (its TEXT column) by the assigned HADM_IDs. The resulting training, validation, and test sets each have three columns: TEXT (the note text), ID (the HADM_ID), and Label (0 or 1).
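The original pipeline ran these steps as SQL against PostgreSQL. As a rough, hypothetical pandas sketch of the same labeling and splitting logic (column names follow the MIMIC-III admissions schema; the original cleaning rules may differ in detail):

import numpy as np
import pandas as pd

# Hypothetical CSV export of the MIMIC-III admissions table.
adm = pd.read_csv('ADMISSIONS.csv', parse_dates=['ADMITTIME', 'DISCHTIME'])
adm = adm.sort_values(['SUBJECT_ID', 'ADMITTIME'])

# Days between this discharge and the same subject's next admission.
adm['NEXT_ADMIT'] = adm.groupby('SUBJECT_ID')['ADMITTIME'].shift(-1)
gap_days = (adm['NEXT_ADMIT'] - adm['DISCHTIME']).dt.days
adm['Label'] = ((gap_days >= 0) & (gap_days < 30)).astype(int)

# Keep only stays longer than 2 days.
adm['LOS_DAYS'] = (adm['DISCHTIME'] - adm['ADMITTIME']).dt.days
adm = adm[adm['LOS_DAYS'] > 2]

# Random 0.8 : 0.1 : 0.1 split over HADM_ID.
ids = adm['HADM_ID'].sample(frac=1.0, random_state=42).values
n = len(ids)
train_ids, val_ids, test_ids = np.split(ids, [int(0.8 * n), int(0.9 * n)])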

The Pretrained Model

The pretrained model used by the original project is based on BERT. BERT was a milestone for NLP: on October 11, 2018, Google released the paper "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", which achieved state-of-the-art results on 11 NLP tasks and drew wide acclaim from the NLP community. BERT has since performed well in text classification, text prediction, and many other areas.

For more on the BERT model, see the link.

The BERT recipe has two parts. First, unsupervised pre-training on a large unlabeled corpus teaches the model general-purpose representations of language.

Second, the pretrained model is fine-tuned with a small amount of labeled training data for each downstream supervised task.
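To make the second step concrete, here is a minimal, hypothetical sketch of a single fine-tuning step using the pytorch-pretrained-bert package installed later in this article. The one-sentence input and its label are made up; a real run would iterate over batches built from the training set:

import torch
from pytorch_pretrained_bert import BertTokenizer, BertForSequenceClassification, BertAdam

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForSequenceClassification.from_pretrained('bert-base-uncased', num_labels=2)
optimizer = BertAdam(model.parameters(), lr=2e-5, warmup=0.1, t_total=1000)

# [CLS] + at most 510 wordpieces + [SEP] stays within BERT's 512-token limit.
tokens = ['[CLS]'] + tokenizer.tokenize('patient discharged in stable condition')[:510] + ['[SEP]']
input_ids = torch.tensor([tokenizer.convert_tokens_to_ids(tokens)])
labels = torch.tensor([1])  # hypothetical: 1 = readmitted within 30 days

model.train()
loss = model(input_ids, labels=labels)  # with labels, the model returns the loss
loss.backward()
optimizer.step()
optimizer.zero_grad()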

The ClinicalBERT model fine-tunes BERT on labeled clinical notes, yielding a model suited to text analysis in the medical domain. See the original project link for details.

Environment Setup

!pip install -U pytorch-pretrained-bert -i https://pypi.tuna.tsinghua.edu.cn/simple

from IPython.core.interactiveshell import InteractiveShell

InteractiveShell.ast_node_interactivity = 'all'

Inspecting the Data

Let's look at the format of the data to be predicted.

import pandas as pd

sample = pd.read_csv('/home/input/MIMIC_note3519/BERT/sample.csv')
sample

   TEXT                                                ID      Label
0  Nursing Progress Note 1900-0700 hours:\n** Ful...  176088  1
1  Nursing Progress Note 1900-0700 hours:\n** Ful...  135568  1
2  NPN:\n\nNeuro: Alert and oriented X2-3, Sleepi...  188180  0
3  RESPIRATORY CARE:\n\n35 yo m adm from osh for ...  110655  0
4  NEURO: A+OX3 pleasant, mae, following commands...  139362  0
5  Nursing Note\nSee Flowsheet\n\nNeuro: Propofol...  176981  0

The TEXT field holds unstructured free text. Let's pull out one record and see what it says.

text = sample['TEXT'][0]
print(text)

Nursing Progress Note 1900-0700 hours:

** Full code

** allergy: nkda

** access: #18 piv to right FA, #18 piv to right FA.

** diagnosis: angioedema

In Brief: Pt is a 51yo F with pmh significant for: COPD, HTN, diabetes insipidus, hypothyroidism, OSA (on bipap at home), restrictive lung disease, pulm artery hypertension attributed to COPD/OSA, ASD with shunt, down syndrome, CHF with LVEF >60%. Also, 45pk-yr smoker (quit in [**2112**]).

Pt brought to [**Hospital1 2**] by EMS after family found with decreased LOC. Pt presented with facial swelling and mental status changes. In [**Name (NI) **], pt with enlarged lips and with sats 99% on 2-4l. Her pupils were pinpoint so given narcan. She c/o LLQ abd pain and also developed a severe HA. ABG with profound resp acidosis 7.18/108/71. Given benadryl, nebs, solumedrol. Difficult intubation-req'd being taken to OR to have fiberoptic used. Also found to have ARF. On admit to ICU-denied pain in abdomen, denied HA. Denied any pain. Pt understands basic english but also used [**Name (NI) **] interpretor to determine these findings. Head CT on [**Name6 (MD) **] [**Name8 (MD) 20**] md as pt was able to nod yes and no and follow commands.

NEURO: pt is sedate on fent at 50mcg/hr and versed at 0.5mg/hr-able to arouse on this level of sedation. PEARL 2mm/brisk. Able to move all ext's, nod yes and no to questions. Occasional cough.

CARDIAC: sb-nsr with hr high 50's to 70's. Ace inhibitors (pt takes at home) on hold right now as unclear as to what meds or other cause of angioedema. no ectopy. SBP >100 with MAPs > 60.

RESP: nasally intubated. #6.0 tube which is sutured in place. Confirmed by xray for proper placement (5cm above carina). ** some resp events overnight: on 3 occasions thus far, pt noted to have vent alarm 'apnea' though on AC mode and then alarms 'pressure limited/not constant'. At that time-pt appears comfortably sedate (not bucking vent) but dropping TV's into 100's (from 400's), MV to 3.0 and then desats to 60's and 70's with no chest rise and fall noted. Given 100% 02 first two times with immediate elevation of o2 sat to >92%. The third time RT ambubagged to see if it was difficult-also climbed right back up to sat >93%. Suctioned for scant sputum only. ? as to whether tube was kinking off in trachea or occluding somehow. RT also swapped out the vent for a new one in case [**Last Name **] problem. Issue did occur again with new vent (so ruled out a [**Last Name **] problem). Several ABGs overnight (see carevue) which last abg stable. Current settings: 50%/ tv 400/ac 22/p5. Lungs with some rhonchi-received MDI's/nebs overnight. IVF infusing (some risk for chf) Sats have been >93% except for above events. cont to assess.

GI/GU: abd soft, distended, obese. two small bm's this shift-brown, soft, loose. Pt without FT and unlikely to have one placed [**3-3**] edema. IVF started for ARF and [**3-3**] without nutrition. Foley in place draining clear, yellow 25-80cc/hr.

ID: initial wbc of 12. Pt spiked temp overnight to 102.1-given tylenol supp (last temp 101.3) and pan cx'd. no abx at this time.

[**Month/Day (2) **]: fs wnl

The Note's Content

This is an ICU nursing note for a 51-year-old woman with many comorbidities: COPD, hypertension, hypothyroidism, Down syndrome, a congenital atrial septal defect, chronic heart failure, pulmonary artery hypertension, and obstructive sleep apnea. She was brought to the hospital after her family found her with decreased consciousness: a severe allergic reaction with acute angioedema. She is sedated but rousable. During her course she developed a hypercoagulable state, underwent thrombolysis, and also suffered acute renal failure.

Model Inference

Change the current working directory:

import os

os.chdir('/home/work/clinicalBERT')

Base Class Definitions

Each class is described by its docstring and comments.

import csv
import pandas as pd

class InputExample(object):
    """A single training/test example for simple sequence classification."""

    def __init__(self, guid, text_a, text_b=None, label=None):
        """Constructs a InputExample.

        Args:
            guid: Unique id for the example.
            text_a: string. The untokenized text of the first sequence. For single
                sequence tasks, only this sequence must be specified.
            text_b: (Optional) string. The untokenized text of the second sequence.
                Only must be specified for sequence pair tasks.
            label: (Optional) string. The label of the example. This should be
                specified for train and dev examples, but not for test examples.
        """
        self.guid = guid
        self.text_a = text_a
        self.text_b = text_b
        self.label = label


class InputFeatures(object):
    """A single set of features of data."""

    def __init__(self, input_ids, input_mask, segment_ids, label_id):
        self.input_ids = input_ids
        self.input_mask = input_mask
        self.segment_ids = segment_ids
        self.label_id = label_id


class DataProcessor(object):
    """Base class for data converters for sequence classification data sets."""

    def get_labels(self):
        """Gets the list of labels for this data set."""
        raise NotImplementedError()

    @classmethod
    def _read_tsv(cls, input_file, quotechar=None):
        """Reads a tab separated value file."""
        with open(input_file, "r") as f:
            reader = csv.reader(f, delimiter="\t", quotechar=quotechar)
            lines = []
            for line in reader:
                lines.append(line)
            return lines

    @classmethod
    def _read_csv(cls, input_file):
        """Reads a comma separated value file."""
        file = pd.read_csv(input_file)
        lines = zip(file.ID, file.TEXT, file.Label)
        return lines

Data Reading and Processing Classes

These subclass the base classes defined above.

def create_examples(lines, set_type):
    """Creates examples for the training and dev sets."""
    examples = []
    for (i, line) in enumerate(lines):
        guid = "%s-%s" % (set_type, i)
        text_a = line[1]
        label = str(int(line[2]))
        examples.append(
            InputExample(guid=guid, text_a=text_a, text_b=None, label=label))
    return examples


class ReadmissionProcessor(DataProcessor):
    def get_test_examples(self, data_dir):
        return create_examples(
            self._read_csv(os.path.join(data_dir, "sample.csv")), "test")

    def get_labels(self):
        return ["0", "1"]

Scaffolding Functions

Next we define the helper functions: truncate_seq_pair, convert_examples_to_features, vote_score, pr_curve_plot, and vote_pr_curve. A short usage example of truncate_seq_pair follows its definition.

def truncate_seq_pair(tokens_a, tokens_b, max_length):
    """Truncates a sequence pair in place to the maximum length."""
    # This is a simple heuristic which will always truncate the longer sequence
    # one token at a time. This makes more sense than truncating an equal percent
    # of tokens from each, since if one sequence is very short then each token
    # that's truncated likely contains more information than a longer sequence.
    while True:
        total_length = len(tokens_a) + len(tokens_b)
        if total_length <= max_length:
            break
        if len(tokens_a) > len(tokens_b):
            tokens_a.pop()
        else:
            tokens_b.pop()
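A small, hypothetical example of the truncation behavior with a budget of 5 tokens; the longer list loses tokens one at a time until the pair fits:

tokens_a = ['pt', 'admitted', 'with', 'chest', 'pain']
tokens_b = ['discharged', 'home']
truncate_seq_pair(tokens_a, tokens_b, 5)
print(tokens_a)  # ['pt', 'admitted', 'with']
print(tokens_b)  # ['discharged', 'home']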

Load the data file and convert it to tensors:

import logging

logging.basicConfig(format='%(asctime)s - %(levelname)s - %(name)s - %(message)s',
                    datefmt='%m/%d/%Y %H:%M:%S',
                    level=logging.INFO)
logger = logging.getLogger(__name__)


def convert_examples_to_features(examples, label_list, max_seq_length, tokenizer):
    """Loads a data file into a list of `InputBatch`s."""
    label_map = {}
    for (i, label) in enumerate(label_list):
        label_map[label] = i

    features = []
    for (ex_index, example) in enumerate(examples):
        tokens_a = tokenizer.tokenize(example.text_a)

        tokens_b = None
        if example.text_b:
            tokens_b = tokenizer.tokenize(example.text_b)

        if tokens_b:
            # Modifies `tokens_a` and `tokens_b` in place so that the total
            # length is less than the specified length.
            # Account for [CLS], [SEP], [SEP] with "- 3"
            truncate_seq_pair(tokens_a, tokens_b, max_seq_length - 3)
        else:
            # Account for [CLS] and [SEP] with "- 2"
            if len(tokens_a) > max_seq_length - 2:
                tokens_a = tokens_a[0:(max_seq_length - 2)]

        # The convention in BERT is:
        # (a) For sequence pairs:
        #  tokens:   [CLS] is this jack ##son ##ville ? [SEP] no it is not . [SEP]
        #  type_ids: 0     0  0    0    0     0       0 0     1  1  1  1   1 1
        # (b) For single sequences:
        #  tokens:   [CLS] the dog is hairy . [SEP]
        #  type_ids: 0     0   0   0  0     0 0
        #
        # Where "type_ids" are used to indicate whether this is the first
        # sequence or the second sequence. The embedding vectors for `type=0` and
        # `type=1` were learned during pre-training and are added to the wordpiece
        # embedding vector (and position vector). This is not *strictly* necessary
        # since the [SEP] token unambiguously separates the sequences, but it makes
        # it easier for the model to learn the concept of sequences.
        #
        # For classification tasks, the first vector (corresponding to [CLS]) is
        # used as the "sentence vector". Note that this only makes sense because
        # the entire model is fine-tuned.
        tokens = []
        segment_ids = []
        tokens.append("[CLS]")
        segment_ids.append(0)
        for token in tokens_a:
            tokens.append(token)
            segment_ids.append(0)
        tokens.append("[SEP]")
        segment_ids.append(0)

        if tokens_b:
            for token in tokens_b:
                tokens.append(token)
                segment_ids.append(1)
            tokens.append("[SEP]")
            segment_ids.append(1)

        input_ids = tokenizer.convert_tokens_to_ids(tokens)

        # The mask has 1 for real tokens and 0 for padding tokens. Only real
        # tokens are attended to.
        input_mask = [1] * len(input_ids)

        # Zero-pad up to the sequence length.
        while len(input_ids) < max_seq_length:
            input_ids.append(0)
            input_mask.append(0)
            segment_ids.append(0)

        assert len(input_ids) == max_seq_length
        assert len(input_mask) == max_seq_length
        assert len(segment_ids) == max_seq_length

        label_id = label_map[example.label]
        if ex_index < 5:
            logger.info("*** Example ***")
            logger.info("guid: %s" % (example.guid))
            logger.info("tokens: %s" % " ".join(
                [str(x) for x in tokens]))
            logger.info("input_ids: %s" % " ".join([str(x) for x in input_ids]))
            logger.info("input_mask: %s" % " ".join([str(x) for x in input_mask]))
            logger.info(
                "segment_ids: %s" % " ".join([str(x) for x in segment_ids]))
            logger.info("label: %s (id = %d)" % (example.label, label_id))

        features.append(
            InputFeatures(input_ids=input_ids,
                          input_mask=input_mask,
                          segment_ids=segment_ids,
                          label_id=label_id))
    return features

Next, the evaluation-curve helpers:

import numpy as np

from sklearn.metrics import roc_curve, auc

import matplotlib.pyplot as plt

def vote_score(df, score, ax):
    df['pred_score'] = score
    df_sort = df.sort_values(by=['ID'])
    # Aggregate the per-note scores of each admission: (max + sum/2) / (1 + n/2).
    temp = (df_sort.groupby(['ID'])['pred_score'].agg(max) + df_sort.groupby(['ID'])['pred_score'].agg(sum) / 2) / (
            1 + df_sort.groupby(['ID'])['pred_score'].agg(len) / 2)
    x = df_sort.groupby(['ID'])['Label'].agg(np.min).values
    df_out = pd.DataFrame({'logits': temp.values, 'ID': x})
    fpr, tpr, thresholds = roc_curve(x, temp.values)
    auc_score = auc(fpr, tpr)
    ax.plot([0, 1], [0, 1], 'k--')
    ax.plot(fpr, tpr, label='Val (area = {:.3f})'.format(auc_score))
    ax.set_xlabel('False positive rate')
    ax.set_ylabel('True positive rate')
    ax.set_title('ROC curve')
    ax.legend(loc='best')
    return fpr, tpr, df_out
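The grouped expression above aggregates the per-note probabilities of each admission into one score, (p_max + p_sum / 2) / (1 + n / 2), which appears to match the ClinicalBERT paper's readmission-scoring rule with scaling constant c = 2: the maximum captures the most alarming note, the sum/2 term folds in the average, and the denominator damps admissions that simply have many notes. A quick arithmetic check:

import numpy as np

p = np.array([0.9, 0.4, 0.2])  # hypothetical per-note probabilities for one ID
score = (p.max() + p.sum() / 2) / (1 + len(p) / 2)
print(score)  # (0.9 + 0.75) / 2.5 = 0.66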

from sklearn.metrics import precision_recall_curve
from funcsigs import signature


def pr_curve_plot(y, y_score, ax):
    precision, recall, _ = precision_recall_curve(y, y_score)
    area = auc(recall, precision)
    step_kwargs = ({'step': 'post'}
                   if 'step' in signature(plt.fill_between).parameters
                   else {})
    ax.step(recall, precision, color='b', alpha=0.2,
            where='post')
    ax.fill_between(recall, precision, alpha=0.2, color='b', **step_kwargs)
    ax.set_xlabel('Recall')
    ax.set_ylabel('Precision')
    ax.set_ylim([0.0, 1.05])
    ax.set_xlim([0.0, 1.0])
    ax.set_title('Precision-Recall curve: AUC={0:0.2f}'.format(
        area))


def vote_pr_curve(df, score, ax):

    df['pred_score'] = score
    df_sort = df.sort_values(by=['ID'])
    # Same per-admission aggregation as in vote_score.
    temp = (df_sort.groupby(['ID'])['pred_score'].agg(max) + df_sort.groupby(['ID'])['pred_score'].agg(sum) / 2) / (
            1 + df_sort.groupby(['ID'])['pred_score'].agg(len) / 2)
    y = df_sort.groupby(['ID'])['Label'].agg(np.min).values
    precision, recall, thres = precision_recall_curve(y, temp)
    pr_thres = pd.DataFrame(data=list(zip(precision, recall, thres)), columns=['prec', 'recall', 'thres'])
    pr_curve_plot(y, temp, ax)
    temp = pr_thres[pr_thres.prec > 0.799999].reset_index()
    rp80 = 0
    if temp.size == 0:
        print('Test Sample too small or RP80=0')
    else:
        rp80 = temp.iloc[0].recall
        print(f'Recall at Precision of 80 is {rp80}')
    return rp80
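The returned rp80 is recall at 80% precision (RP80): the code keeps the thresholds whose precision exceeds 0.8 (the 0.799999 guard sidesteps floating-point equality) and reports the recall at the first of them. Intuitively, it measures how many true readmissions the model catches while at most one in five of its positive calls is a false alarm.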

Configure the Inference Parameters

output_dir: directory for output files
task_name: task name
bert_model: model directory
data_dir: data directory; the default file name is sample.csv
max_seq_length: maximum token sequence length
eval_batch_size: inference batch size; larger values use more memory

config = {
    "local_rank": -1,
    "no_cuda": False,
    "seed": 42,
    "output_dir": './result',
    "task_name": 'readmission',
    "bert_model": '/home/input/MIMIC_note3519/BERT/early_readmission',
    "fp16": False,
    "data_dir": '/home/input/MIMIC_note3519/BERT',
    "max_seq_length": 512,
    "eval_batch_size": 2,
}
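Two notes on these values: max_seq_length is already at BERT's ceiling, since the learned position embeddings cover only 512 tokens, so anything longer is truncated by convert_examples_to_features; and eval_batch_size can be raised on a GPU with more memory to speed up inference.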

Run Inference

Inference produces a lot of log output. To hide it, select the current cell (the bar to its left turns blue) and press the "O" key.

import random

from tqdm import tqdm

from pytorch_pretrained_bert.tokenization import BertTokenizer

from modeling_readmission import BertForSequenceClassification

from torch.utils.data import TensorDataset, SequentialSampler, DataLoader

from torch.utils.data.distributed import DistributedSampler

import torch

processors = {
    "readmission": ReadmissionProcessor
}

if config['local_rank'] == -1 or config['no_cuda']:
    device = torch.device("cuda" if torch.cuda.is_available() and not config['no_cuda'] else "cpu")
    n_gpu = torch.cuda.device_count()
else:
    device = torch.device("cuda", config['local_rank'])
    n_gpu = 1
    # Initializes the distributed backend which will take care of synchronizing nodes/GPUs
    torch.distributed.init_process_group(backend='nccl')
logger.info("device %s n_gpu %d distributed training %r", device, n_gpu, bool(config['local_rank'] != -1))

random.seed(config['seed'])
np.random.seed(config['seed'])
torch.manual_seed(config['seed'])
if n_gpu > 0:
    torch.cuda.manual_seed_all(config['seed'])

os.makedirs(config['output_dir'], exist_ok=True)

task_name = config['task_name'].lower()
if task_name not in processors:
    raise ValueError(f"Task not found: {task_name}")
processor = processors[task_name]()
label_list = processor.get_labels()
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')

# Prepare model
model = BertForSequenceClassification.from_pretrained(config['bert_model'], 1)
if config['fp16']:
    model.half()
model.to(device)
if config['local_rank'] != -1:
    model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[config['local_rank']],
                                                      output_device=config['local_rank'])
elif n_gpu > 1:
    model = torch.nn.DataParallel(model)

eval_examples = processor.get_test_examples(config['data_dir'])
eval_features = convert_examples_to_features(
    eval_examples, label_list, config['max_seq_length'], tokenizer)
logger.info("***** Running evaluation *****")
logger.info("  Num examples = %d", len(eval_examples))
logger.info("  Batch size = %d", config['eval_batch_size'])
all_input_ids = torch.tensor([f.input_ids for f in eval_features], dtype=torch.long)
all_input_mask = torch.tensor([f.input_mask for f in eval_features], dtype=torch.long)
all_segment_ids = torch.tensor([f.segment_ids for f in eval_features], dtype=torch.long)
all_label_ids = torch.tensor([f.label_id for f in eval_features], dtype=torch.long)
eval_data = TensorDataset(all_input_ids, all_input_mask, all_segment_ids, all_label_ids)
if config['local_rank'] == -1:
    eval_sampler = SequentialSampler(eval_data)
else:
    eval_sampler = DistributedSampler(eval_data)
eval_dataloader = DataLoader(eval_data, sampler=eval_sampler, batch_size=config['eval_batch_size'])

model.eval()
eval_loss, eval_accuracy = 0, 0
nb_eval_steps, nb_eval_examples = 0, 0
true_labels = []
pred_labels = []
logits_history = []
m = torch.nn.Sigmoid()
for input_ids, input_mask, segment_ids, label_ids in tqdm(eval_dataloader):
    input_ids = input_ids.to(device)
    input_mask = input_mask.to(device)
    segment_ids = segment_ids.to(device)
    label_ids = label_ids.to(device)
    with torch.no_grad():
        # First pass (with labels) returns the loss; second pass (without labels)
        # returns raw logits, which the sigmoid below turns into probabilities.
        tmp_eval_loss, temp_logits = model(input_ids, segment_ids, input_mask, label_ids)
        logits = model(input_ids, segment_ids, input_mask)
    logits = torch.squeeze(m(logits)).detach().cpu().numpy()
    label_ids = label_ids.to('cpu').numpy()
    outputs = np.asarray([1 if i else 0 for i in (logits.flatten() >= 0.5)])
    tmp_eval_accuracy = np.sum(outputs == label_ids)
    true_labels = true_labels + label_ids.flatten().tolist()
    pred_labels = pred_labels + outputs.flatten().tolist()
    logits_history = logits_history + logits.flatten().tolist()
    eval_loss += tmp_eval_loss.mean().item()
    eval_accuracy += tmp_eval_accuracy
    nb_eval_examples += input_ids.size(0)
    nb_eval_steps += 1

Plot the evaluation curves:

df = pd.DataFrame({'logits': logits_history, 'pred_label': pred_labels, 'label': true_labels})
df_test = pd.read_csv(os.path.join(config['data_dir'], "sample.csv"))
fig = plt.figure(1)
ax1 = fig.add_subplot(1, 2, 1)
ax2 = fig.add_subplot(1, 2, 2)
fpr, tpr, df_out = vote_score(df_test, logits_history, ax1)
rp80 = vote_pr_curve(df_test, logits_history, ax2)
output_eval_file = os.path.join(config['output_dir'], "eval_results.txt")
plt.tight_layout()
plt.show()

Save the inference metrics to the output directory:

eval_loss = eval_loss / nb_eval_steps
eval_accuracy = eval_accuracy / nb_eval_examples
result = {'eval_loss': eval_loss,
          'eval_accuracy': eval_accuracy,
          'RP80': rp80}
with open(output_eval_file, "w") as writer:
    logger.info("***** Eval results *****")
    for key in sorted(result.keys()):
        logger.info("  %s = %s", key, str(result[key]))
        writer.write("%s = %s\n" % (key, str(result[key])))

Summary

ICU nursing notes carry rich information about a patient's vital signs, history, and course of treatment, and with this model we can use them to predict whether the patient will be readmitted.

The code has been pushed to GitHub.

For more, see my personal blog.

