SimBERT text-similarity semantic recall: saving the vectors and serving them online
https://blog.csdn.net/weixin_42357472/article/details/116205077

SimBERT (a BERT model built on the UniLM idea that unifies retrieval and generation; main uses: similar-text generation and similar-text retrieval)
https://blog.csdn.net/u013250861/article/details/123649047

The script below encodes a corpus of titles with a bert4keras SimBERT checkpoint, L2-normalizes and saves the sentence vectors, and retrieves the most similar titles for a query; a PaddleNLP Taskflow alternative follows at the end.
```python
import os
os.environ['TF_KERAS'] = '1'  # must be set before importing bert4keras

import numpy as np
import pandas as pd
from bert4keras.backend import keras, K
from bert4keras.models import build_transformer_model
from bert4keras.tokenizers import Tokenizer
from bert4keras.snippets import sequence_padding
from bert4keras.snippets import uniout  # imported for its side effect: readable Chinese output

maxlen = 32

# BERT configuration
config_path = r'D:***t\chinese_simbert_L-6_H-384_A-12\bert_config.json'
checkpoint_path = r'D:\*****rt\chinese_simbert_L-6_H-384_A-12\bert_model.ckpt'
dict_path = r'D:\****rt\chinese_simbert_L-6_H-384_A-12\vocab.txt'

# build the tokenizer
tokenizer = Tokenizer(dict_path, do_lower_case=True)

# build and load the UniLM-style SimBERT model
bert = build_transformer_model(
    config_path,
    checkpoint_path,
    with_pool='linear',
    application='unilm',
    return_keras_model=False,
)

# the encoder maps (token_ids, segment_ids) to the pooled sentence vector
encoder = keras.models.Model(bert.model.inputs, bert.model.outputs[0])
```
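Before encoding the whole corpus, it can help to smoke-test the encoder on a single sentence (this check is my addition, not part of the original post). For the chinese_simbert_L-6_H-384_A-12 checkpoint, the pooled vector should be 384-dimensional:

```python
# hypothetical smoke test: encode one sentence and inspect the vector shape
ids, segs = tokenizer.encode('今天天气不错', maxlen=maxlen)
vec = encoder.predict([[ids], [segs]])[0]
print(vec.shape)  # expected: (384,) for an H-384 checkpoint
```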
```python
# load the corpus: one candidate sentence per row in the 'title' column
datas1 = pd.read_csv(r'D:****raw_datas150.csv')
datas_all = list(datas1['title'])

# encode every corpus sentence to token ids
# (the original also had a commented-out branch encoding b_token_ids and
#  labels for labeled-pair evaluation; it is omitted here)
data = datas_all
a_token_ids, texts = [], []
for d in data:
    token_ids = tokenizer.encode(d, maxlen=maxlen)[0]
    a_token_ids.append(token_ids)
    texts.append(d)

a_token_ids = sequence_padding(a_token_ids)

# segment ids are all zero for single-sentence encoding
a_vecs = encoder.predict([a_token_ids, np.zeros_like(a_token_ids)], verbose=True)

# L2-normalize each row so that a dot product equals cosine similarity
a_vecs = a_vecs / (a_vecs**2).sum(axis=1, keepdims=True)**0.5
print(type(a_vecs))

# persist the corpus vectors for reuse, e.g. by the online service
np.save('sim_all_datas.npy', a_vecs)
```
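Because every row of `a_vecs` is unit-length after the normalization above, the plain dot product used below is exactly cosine similarity. A minimal standalone illustration (toy vectors, not from the post):

```python
import numpy as np

u = np.array([3.0, 4.0])
v = np.array([4.0, 3.0])
u /= np.linalg.norm(u)  # make both vectors unit-length
v /= np.linalg.norm(v)
# for unit vectors, the dot product equals the cosine of the angle between them
print(np.dot(u, v))  # 0.96
```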
```python
# in a fresh process, reload the saved vectors instead of recomputing them:
# a_vecs = np.load(r'D:\tcl\simbert\sim_all_datas.npy')

def most_similar(text, topn=10):
    """Retrieve the topn sentences most similar to `text`."""
    token_ids, segment_ids = tokenizer.encode(text, maxlen=maxlen)
    vec = encoder.predict([[token_ids], [segment_ids]])[0]
    vec /= (vec**2).sum()**0.5           # normalize the query vector too
    sims = np.dot(a_vecs, vec)           # cosine similarity against the whole corpus
    return [(i, datas_all[i], sims[i]) for i in sims.argsort()[::-1][:topn]]

kk = ['海绵宝宝']
mmm = []
for i in kk:
    results = most_similar(i, 10)
    mmm.append([i, results])
    print(i, results)
```
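The title promises an online service, but the original code stops at the retrieval function. Below is a minimal sketch of one way to expose `most_similar` over HTTP; Flask, the route name, and the parameter names are my assumptions, not part of the original post:

```python
# minimal serving sketch (assumption: Flask; any WSGI/ASGI framework works)
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route('/most_similar')
def most_similar_api():
    text = request.args.get('text', '')
    topn = int(request.args.get('topn', 10))
    hits = most_similar(text, topn)
    # numpy scalars are not JSON-serializable, so cast explicitly
    return jsonify([{'index': int(i), 'title': t, 'score': float(s)}
                    for i, t, s in hits])

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=8000)
```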
An alternative: PaddleNLP's Taskflow ships the same SimBERT model (simbert-base-chinese) as an off-the-shelf text-similarity task:

```python
from paddlenlp import Taskflow
similarity = Taskflow('text_similarity')
```
```
[2022-03-22 15:17:18,306] [    INFO] - Downloading model_state.pdparams from https://bj.bcebos.com/paddlenlp/taskflow/text_similarity/simbert-base-chinese/model_state.pdparams
100%|██████████| 615M/615M [00:29<00:00, 22.1MB/s]
[2022-03-22 15:17:51,977] [    INFO] - Downloading model_config.json from https://bj.bcebos.com/paddlenlp/taskflow/text_similarity/simbert-base-chinese/model_config.json
100%|██████████| 334/334 [00:00<00:00, 197kB/s]
[2022-03-22 15:17:52,154] [    INFO] - Downloading https://bj.bcebos.com/paddlenlp/models/transformers/simbert/vocab.txt and saved to /root/.paddlenlp/models/simbert-base-chinese
[2022-03-22 15:17:52,154] [    INFO] - Downloading vocab.txt from https://bj.bcebos.com/paddlenlp/models/transformers/simbert/vocab.txt
100%|██████████| 63.4k/63.4k [00:00<00:00, 744kB/s]
[2022-03-22 15:18:10,818] [    INFO] - Weights from pretrained model not used in BertModel: ['cls.predictions.decoder_bias', 'cls.predictions.transform.weight', 'cls.predictions.transform.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.LayerNorm.bias', 'cls.predictions.decoder_weight', 'cls.predictions.decoder.bias', 'cls.seq_relationship.weight', 'cls.seq_relationship.bias']
[2022-03-22 15:18:12,113] [    INFO] - Converting to the inference model cost a little time.
[2022-03-22 15:18:30,093] [    INFO] - The inference model save in the path:/root/.paddlenlp/taskflow/text_similarity/simbert-base-chinese/static/inference
```
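With the weights downloaded, the Taskflow instance scores pairs of texts passed as a list of [text1, text2] pairs. The example sentences and the printed score below are illustrative, not from the original post:

```python
# score one or more text pairs; each call returns a list of dicts
result = similarity([['春天适合种什么花？', '春天适合种什么菜？']])
print(result)
# e.g. [{'text1': '春天适合种什么花？', 'text2': '春天适合种什么菜？',
#        'similarity': 0.83}]
```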