Keras BERT in Practice

0. Downloading a pretrained BERT model

The pretrained BERT models and weights can be downloaded from Google's official BERT repository on GitHub.
However, since the files are hosted on Google Cloud Storage, downloads can be slow or time out. You can try the Amazon S3 mirrors instead:

PRETRAINED_MODEL_ARCHIVE_MAP = {
    'bert-base-uncased': "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased.tar.gz",
    'bert-large-uncased': "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-uncased.tar.gz",
    'bert-base-cased': "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-cased.tar.gz",
    'bert-base-multilingual': "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-multilingual.tar.gz",
    'bert-base-chinese': "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-chinese.tar.gz",
}
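
If you prefer scripting the download, one of these archives can be fetched and unpacked with the Python standard library alone. A minimal sketch (the target directory name is arbitrary):

import tarfile
import urllib.request

# One of the S3 mirrors listed above
url = 'https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-chinese.tar.gz'
archive = 'bert-base-chinese.tar.gz'

# Download the archive, then unpack its contents next to it
urllib.request.urlretrieve(url, archive)
with tarfile.open(archive, 'r:gz') as tar:
    tar.extractall('bert-base-chinese')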

1. The keras-bert package

Installation

pip install keras-bert
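
Besides training from scratch (shown below), keras-bert can load a Google-style pretrained checkpoint directly via its `load_trained_model_from_checkpoint` helper. A minimal sketch, assuming a checkpoint such as the Chinese base model has already been downloaded and unpacked (the paths are illustrative):

from keras_bert import load_trained_model_from_checkpoint

# Illustrative paths inside an unpacked Google BERT checkpoint directory
config_path = 'chinese_L-12_H-768_A-12/bert_config.json'
checkpoint_path = 'chinese_L-12_H-768_A-12/bert_model.ckpt'

# training=False builds the feature-extraction graph without the MLM/NSP heads
model = load_trained_model_from_checkpoint(
    config_path,
    checkpoint_path,
    training=False,
    seq_len=128,
)
model.summary()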

Example:
Training a model from scratch:

import keras
from keras_bert import get_base_dict, get_model, compile_model, gen_batch_inputs


# A toy input example
sentence_pairs = [
    [['all', 'work', 'and', 'no', 'play'], ['makes', 'jack', 'a', 'dull', 'boy']],
    [['from', 'the', 'day', 'forth'], ['my', 'arm', 'changed']],
    [['and', 'a', 'voice', 'echoed'], ['power', 'give', 'me', 'more', 'power']],
]


# Build token dictionary
token_dict = get_base_dict()  # A dict that contains some special tokens
for pairs in sentence_pairs:
    for token in pairs[0] + pairs[1]:
        if token not in token_dict:
            token_dict[token] = len(token_dict)
token_list = list(token_dict.keys())  # Used for selecting a random word


# Build & train the model
model = get_model(
    token_num=len(token_dict),
    head_num=5,
    transformer_num=12,
    embed_dim=25,
    feed_forward_dim=100,
    seq_len=20,
    pos_num=20,
    dropout_rate=0.05,
)
compile_model(model)
model.summary()

def _generator():
    while True:
        yield gen_batch_inputs(
            sentence_pairs,
            token_dict,
            token_list,
            seq_len=20,
            mask_rate=0.3,
            swap_sentence_rate=1.0,
        )

model.fit_generator(
    generator=_generator(),
    steps_per_epoch=1000,
    epochs=2,
    validation_data=_generator(),
    validation_steps=100,
    callbacks=[
        keras.callbacks.EarlyStopping(monitor='val_loss', patience=5)
    ],
)


# Use the trained model
inputs, output_layer = get_model(
    token_num=len(token_dict),
    head_num=5,
    transformer_num=12,
    embed_dim=25,
    feed_forward_dim=100,
    seq_len=20,
    pos_num=20,
    dropout_rate=0.05,
    training=False,      # The input layers and output layer will be returned if `training` is `False`
    trainable=False,     # Whether the model is trainable. The default value is the same with `training`
    output_layer_num=4,  # The number of layers whose outputs will be concatenated as a single output.
                         # Only available when `training` is `False`.
)

Output:

__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
Input-Token (InputLayer)        (None, 20)           0                                            
__________________________________________________________________________________________________
Input-Segment (InputLayer)      (None, 20)           0                                            
__________________________________________________________________________________________________
Embedding-Token (TokenEmbedding [(None, 20, 25), (28 700         Input-Token[0][0]                
__________________________________________________________________________________________________
Embedding-Segment (Embedding)   (None, 20, 25)       50          Input-Segment[0][0]              
__________________________________________________________________________________________________
......
MLM-Norm (LayerNormalization)   (None, 20, 25)       50          MLM-Dense[0][0]                  
__________________________________________________________________________________________________
Extract (Extract)               (None, 25)           0           Encoder-12-FeedForward-Norm[0][0]
__________________________________________________________________________________________________
MLM-Sim (EmbeddingSimilarity)   (None, 20, 28)       28          MLM-Norm[0][0]                   
                                                                 Embedding-Token[0][1]            
__________________________________________________________________________________________________
Input-Masked (InputLayer)       (None, 20)           0                                            
__________________________________________________________________________________________________
NSP-Dense (Dense)               (None, 25)           650         Extract[0][0]                    
__________________________________________________________________________________________________
MLM (Masked)                    (None, 20, 28)       0           MLM-Sim[0][0]                    
                                                                 Input-Masked[0][0]               
__________________________________________________________________________________________________
NSP (Dense)                     (None, 2)            52          NSP-Dense[0][0]                  
==================================================================================================
Total params: 96,630
Trainable params: 96,630
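
Since `training=False`, the second call to `get_model` returns the input layers and the concatenated output layer rather than a compiled model; wrapping them in a plain Keras `Model` gives a feature extractor. A minimal sketch reusing `inputs`, `output_layer` and `token_dict` from the code above (the sample sentence is illustrative):

import numpy as np

# Wrap the returned layers into a feature-extraction model
feature_model = keras.models.Model(inputs=inputs, outputs=output_layer)

# Build one padded input: [CLS] tokens [SEP], padded with 0 up to seq_len=20
tokens = ['[CLS]', 'all', 'work', 'and', 'no', 'play', '[SEP]']
token_ids = [token_dict[t] for t in tokens] + [0] * (20 - len(tokens))
segment_ids = [0] * 20  # single sentence, so every position is segment 0

features = feature_model.predict([np.array([token_ids]), np.array([segment_ids])])
print(features.shape)  # expected (1, 20, 100): embed_dim 25 x 4 concatenated layers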

2. The bert4keras package

This package is Su Jianlin's rewrite and optimization of keras-bert.

Installation

pip install bert4keras

Usage example:

from keras.layers import *

from bert4keras.backend import keras, set_gelu
from bert4keras.bert import build_bert_model  # renamed build_transformer_model (in bert4keras.models) in newer versions
from bert4keras.optimizers import Adam
from bert4keras.snippets import sequence_padding, DataGenerator
from bert4keras.tokenizer import Tokenizer
import pandas as pd
import numpy as np

set_gelu('tanh')  # switch to the tanh approximation of GELU

maxlen = 32
batch_size = 16
config_path = 'electra_tiny/bert_config_tiny.json'
checkpoint_path = 'electra_tiny/model.ckpt-1000000'
dict_path = 'electra_tiny/vocab.txt'


def load_data(filename):
    """Read (text1, text2, label) rows from a CSV file."""
    D = pd.read_csv(filename).values.tolist()
    return D


# Load the dataset
all_data = load_data('train.csv')  # placeholder: path to your (text1, text2, label) CSV
random_order = list(range(len(all_data)))
np.random.shuffle(random_order)  # shuffle in place (the original shuffled a throwaway copy)
# Roughly a 4:1:1 train/valid/test split
train_data = [all_data[j] for i, j in enumerate(random_order) if i % 6 != 1 and i % 6 != 2]
valid_data = [all_data[j] for i, j in enumerate(random_order) if i % 6 == 1]
test_data = [all_data[j] for i, j in enumerate(random_order) if i % 6 == 2]
# Build the tokenizer
tokenizer = Tokenizer(dict_path, do_lower_case=True)


class data_generator(DataGenerator):
    """数据生成器
    """

    def __iter__(self, random=False):
        idxs = list(range(len(self.data)))
        if random:
            np.random.shuffle(idxs)
        batch_token_ids, batch_segment_ids, batch_labels = [], [], []
        for i in idxs:
            text1, text2, label = self.data[i]
            token_ids, segment_ids = tokenizer.encode(text1, text2, max_length=maxlen)
            batch_token_ids.append(token_ids)
            batch_segment_ids.append(segment_ids)
            batch_labels.append([label])
            if len(batch_token_ids) == self.batch_size or i == idxs[-1]:
                batch_token_ids = sequence_padding(batch_token_ids)
                batch_segment_ids = sequence_padding(batch_segment_ids)
                batch_labels = sequence_padding(batch_labels)
                yield [batch_token_ids, batch_segment_ids], batch_labels
                batch_token_ids, batch_segment_ids, batch_labels = [], [], []


# Load the pretrained model
bert = build_bert_model(
    config_path=config_path,
    checkpoint_path=checkpoint_path,
    model='electra',
    return_keras_model=False,  # return the bert4keras wrapper so bert.initializer is available
)  # build the model and load its weights

# Take the hidden state of the [CLS] token as the sentence-pair representation
output = Lambda(lambda x: x[:, 0], name='CLS-token')(bert.model.output)
output = Dense(units=2,
               activation='softmax',
               kernel_initializer=bert.initializer)(output)

model = keras.models.Model(bert.model.input, output)
model.summary()

model.compile(
    loss='sparse_categorical_crossentropy',
    optimizer=Adam(2e-4),  # use a sufficiently small learning rate
    # optimizer=PiecewiseLinearLearningRate(Adam(5e-5), {10000: 1, 30000: 0.1}),
    metrics=['accuracy'],
)

# Wrap the datasets in data generators
train_generator = data_generator(train_data, batch_size)
valid_generator = data_generator(valid_data, batch_size)
test_generator = data_generator(test_data, batch_size)


def evaluate(data):
    """Compute classification accuracy over a data generator."""
    total, right = 0., 0.
    for x_true, y_true in data:
        y_pred = model.predict(x_true).argmax(axis=1)
        y_true = y_true[:, 0]
        total += len(y_true)
        right += (y_true == y_pred).sum()
    return right / total


class Evaluator(keras.callbacks.Callback):
    """Saves the weights whenever validation accuracy improves."""
    def __init__(self):
        super().__init__()
        self.best_val_acc = 0.

    def on_epoch_end(self, epoch, logs=None):
        val_acc = evaluate(valid_generator)
        if val_acc > self.best_val_acc:
            self.best_val_acc = val_acc
            model.save_weights('best_model.weights')
        test_acc = evaluate(test_generator)
        print(u'val_acc: %.5f, best_val_acc: %.5f, test_acc: %.5f\n'
              % (val_acc, self.best_val_acc, test_acc))


evaluator = Evaluator()
model.fit_generator(train_generator.forfit(),
                    steps_per_epoch=len(train_generator),
                    epochs=20,
                    callbacks=[evaluator])

model.load_weights('best_model.weights')
print(u'final test acc: %.5f\n' % (evaluate(test_generator)))
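
After training, the best weights can be used for single-pair inference by encoding the pair exactly as the data generator does. A minimal sketch reusing `tokenizer`, `model` and `maxlen` from above (the sample sentences are illustrative):

def predict_pair(text1, text2):
    """Return (predicted label, class probabilities) for one sentence pair."""
    token_ids, segment_ids = tokenizer.encode(text1, text2, max_length=maxlen)
    token_ids = sequence_padding([token_ids])
    segment_ids = sequence_padding([segment_ids])
    probs = model.predict([token_ids, segment_ids])[0]
    return probs.argmax(), probs

label, probs = predict_pair(u'咳嗽怎么办', u'如何缓解咳嗽')
print(label, probs)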

References:

  1. GitHub: google-research/bert;
  2. GitHub: keras-bert (CyberZHG);
  3. A first look at the BERT-Pytorch demo;
  4. ELECTRA + bert4keras baseline for the "公益AI之星" challenge (COVID-19 similar sentence-pair matching competition), by Yan Guangqing (Beijing);
  5. A walkthrough of the BERT source code.