In the previous article we finished syncing the business data. Another indispensable input for a recommendation system is user behavior data; it is fair to say that behavior data is the bedrock of a recommender, and even the cleverest cook cannot make a meal without rice. So the next step is to sync user behavior data into the recommendation system's data stores.

In an article recommendation system, user behaviors include exposure, click, dwell (reading), collect, share, and so on. The fields we define for a behavior record are therefore: event time (actionTime), reading time (readTime), channel ID (channelId), event name (action), user ID (userId), article ID (articleId), and algorithm ID (algorithmCombine). Records are stored in JSON format, as shown below:

# Exposure event
{"actionTime":"2019-04-10 18:15:35","readTime":"","channelId":0,"param":{"action": "exposure", "userId": "2", "articleId": "[18577, 14299]", "algorithmCombine": "C2"}}

# Events triggered on an article
{"actionTime":"2019-04-10 18:15:36","readTime":"","channelId":18,"param":{"action": "click", "userId": "2", "articleId": "18577", "algorithmCombine": "C2"}}
{"actionTime":"2019-04-10 18:15:38","readTime":"1621","channelId":18,"param":{"action": "read", "userId": "2", "articleId": "18577", "algorithmCombine": "C2"}}
{"actionTime":"2019-04-10 18:15:39","readTime":"","channelId":18,"param":{"action": "click", "userId": "1", "articleId": "14299", "algorithmCombine": "C2"}}
{"actionTime":"2019-04-10 18:15:39","readTime":"","channelId":18,"param":{"action": "click", "userId": "2", "articleId": "14299", "algorithmCombine": "C2"}}
{"actionTime":"2019-04-10 18:15:41","readTime":"914","channelId":18,"param":{"action": "read", "userId": "2", "articleId": "14299", "algorithmCombine": "C2"}}
{"actionTime":"2019-04-10 18:15:47","readTime":"7256","channelId":18,"param":{"action": "read", "userId": "1", "articleId": "14299", "algorithmCombine": "C2"}}

Offline User Behavior Data

Because user behavior data is large in volume, it is typically updated once a day for offline computation. First, create the behavior database profile and the behavior table user_action in Hive, partition the table by date, and use a JSON SerDe so the table matches the JSON format:

-- Create the user behavior database
create database if not exists profile comment "user action" location '/user/hive/warehouse/profile.db/';
-- Create the user behavior table
create table user_action
(
actionTime STRING comment "user actions time",
readTime STRING comment "user reading time",
channelId INT comment "article channel id",
param map<STRING, STRING> comment "action parameter"
)
COMMENT "user primitive action"
PARTITIONED BY (dt STRING)                                 -- partition by date
ROW FORMAT SERDE 'org.apache.hive.hcatalog.data.JsonSerDe' -- parse JSON records
LOCATION '/user/hive/warehouse/profile.db/user_action';
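
With the table in place (and a day's partition added, as described below), offline jobs can query it directly. As a quick sanity check, the table can be read from PySpark with Hive support enabled; this is only a sketch and assumes Spark is configured against the same Hive metastore.

# Minimal sketch: query the behavior table from PySpark.
# Assumes Spark is configured to use the same Hive metastore.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("user-action-check")
         .enableHiveSupport()
         .getOrCreate())

# param is a map<string, string>, so individual fields are accessed by key
df = spark.sql("""
    SELECT actionTime, readTime, channelId,
           param['action']    AS action,
           param['userId']    AS userId,
           param['articleId'] AS articleId
    FROM profile.user_action
    WHERE dt = '2019-04-10'
""")
df.show(5, truncate=False)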

User behavior data is normally written to log files on the application servers. We can have Flume tail the log file on the application server and collect the behavior data into the HDFS directory backing the Hive user_action table. The Flume configuration is as follows:

a1.sources = s1
a1.sinks = k1
a1.channels = c1

a1.sources.s1.channels= c1
a1.sources.s1.type = exec
a1.sources.s1.command = tail -F /root/logs/userClick.log
a1.sources.s1.interceptors=i1 i2
a1.sources.s1.interceptors.i1.type=regex_filter
a1.sources.s1.interceptors.i1.regex=\\{.*\\}
a1.sources.s1.interceptors.i2.type=timestamp

# c1
a1.channels.c1.type=memory
a1.channels.c1.capacity=30000
a1.channels.c1.transactionCapacity=1000

# k1
a1.sinks.k1.type=hdfs
a1.sinks.k1.channel=c1
a1.sinks.k1.hdfs.path=hdfs://192.168.19.137:9000/user/hive/warehouse/profile.db/user_action/%Y-%m-%d
a1.sinks.k1.hdfs.useLocalTimeStamp = true
a1.sinks.k1.hdfs.fileType=DataStream
a1.sinks.k1.hdfs.writeFormat=Text
a1.sinks.k1.hdfs.rollInterval=0
a1.sinks.k1.hdfs.rollSize=10240
a1.sinks.k1.hdfs.rollCount=0
a1.sinks.k1.hdfs.idleTimeout=60

Write the Flume startup script collect_click.sh:

#!/usr/bin/env bash

export JAVA_HOME=/root/bigdata/jdk
export HADOOP_HOME=/root/bigdata/hadoop
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin

/root/bigdata/flume/bin/flume-ng agent -c /root/bigdata/flume/conf -f /root/bigdata/flume/conf/collect_click.conf -Dflume.root.logger=INFO,console -name a1

After Flume creates the dated directory, the corresponding Hive partition still has to be added manually before the data can be loaded:

alter table user_action add partition (dt='2019-11-11') location "/user/hive/warehouse/profile.db/user_action/2019-11-11/";
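
Adding the partition by hand every day is easy to forget, so in practice this step is usually scheduled. The sketch below shows one way to do it with the pyhive library; the HiveServer2 host and port, and the use of pyhive itself, are assumptions rather than part of the original setup.

# Minimal sketch of a daily partition-registration job.
# Assumptions: HiveServer2 is reachable at 192.168.19.137:10000 and the
# pyhive package is installed; adjust host, port and paths to your environment.
from datetime import date
from pyhive import hive

def add_today_partition():
    today = date.today().strftime("%Y-%m-%d")
    conn = hive.Connection(host="192.168.19.137", port=10000,
                           username="root", database="profile")
    cursor = conn.cursor()
    # IF NOT EXISTS makes the job safe to re-run
    cursor.execute(
        "ALTER TABLE user_action ADD IF NOT EXISTS "
        f"PARTITION (dt='{today}') "
        f"LOCATION '/user/hive/warehouse/profile.db/user_action/{today}/'"
    )
    conn.close()

if __name__ == "__main__":
    add_today_partition()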

Real-Time User Behavior Data

To make recommendations more timely, we also need to collect user behavior data in real time for online computation. Here Flume delivers the log data to Kafka, and online computing tasks read the real-time behavior data from Kafka. First, start ZooKeeper as a daemon:

/root/bigdata/kafka/bin/zookeeper-server-start.sh -daemon /root/bigdata/kafka/config/zookeeper.properties

Start Kafka:

/root/bigdata/kafka/bin/kafka-server-start.sh /root/bigdata/kafka/config/server.properties

# Start a console producer
/root/bigdata/kafka/bin/kafka-console-producer.sh --broker-list 192.168.19.137:9092 --sync --topic click-trace
# Start a console consumer
/root/bigdata/kafka/bin/kafka-console-consumer.sh --bootstrap-server 192.168.19.137:9092 --topic click-trace

Modify the Flume log-collection configuration by adding channel c2 and sink k2 so that the log data is also delivered to Kafka:

a1.sources = s1
a1.sinks = k1 k2
a1.channels = c1 c2

a1.sources.s1.channels= c1 c2
a1.sources.s1.type = exec
a1.sources.s1.command = tail -F /root/logs/userClick.log
a1.sources.s1.interceptors=i1 i2
a1.sources.s1.interceptors.i1.type=regex_filter
a1.sources.s1.interceptors.i1.regex=\\{.*\\}
a1.sources.s1.interceptors.i2.type=timestamp

# c1
a1.channels.c1.type=memory
a1.channels.c1.capacity=30000
a1.channels.c1.transactionCapacity=1000

# c2
a1.channels.c2.type=memory
a1.channels.c2.capacity=30000
a1.channels.c2.transactionCapacity=1000

# k1
a1.sinks.k1.type=hdfs
a1.sinks.k1.channel=c1
a1.sinks.k1.hdfs.path=hdfs://192.168.19.137:9000/user/hive/warehouse/profile.db/user_action/%Y-%m-%d
a1.sinks.k1.hdfs.useLocalTimeStamp = true
a1.sinks.k1.hdfs.fileType=DataStream
a1.sinks.k1.hdfs.writeFormat=Text
a1.sinks.k1.hdfs.rollInterval=0
a1.sinks.k1.hdfs.rollSize=10240
a1.sinks.k1.hdfs.rollCount=0
a1.sinks.k1.hdfs.idleTimeout=60

# k2
a1.sinks.k2.channel=c2
a1.sinks.k2.type=org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k2.kafka.bootstrap.servers=192.168.19.137:9092
a1.sinks.k2.kafka.topic=click-trace
a1.sinks.k2.kafka.batchSize=20
a1.sinks.k2.kafka.producer.requiredAcks=1

Write the Kafka startup script start_kafka.sh:

#!/usr/bin/env bash
# Start ZooKeeper as a daemon
/root/bigdata/kafka/bin/zookeeper-server-start.sh -daemon /root/bigdata/kafka/config/zookeeper.properties
# Start Kafka
/root/bigdata/kafka/bin/kafka-server-start.sh /root/bigdata/kafka/config/server.properties
# Create the click-trace topic
/root/bigdata/kafka/bin/kafka-topics.sh --zookeeper 192.168.19.137:2181 --create --replication-factor 1 --topic click-trace --partitions 1

Process Management

Here we use Supervisor for process management so that a process is restarted automatically if it exits abnormally. The Flume process configuration is as follows:

[program:collect-click]
command=/bin/bash /root/toutiao_project/scripts/collect_click.sh
user=root
autorestart=true
redirect_stderr=true
stdout_logfile=/root/logs/collect.log
loglevel=info
stopsignal=KILL
stopasgroup=true
killasgroup=true

The Kafka process configuration is as follows:

[program:kafka]
command=/bin/bash /root/toutiao_project/scripts/start_kafka.sh
user=root
autorestart=true
redirect_stderr=true
stdout_logfile=/root/logs/kafka.log
loglevel=info
stopsignal=KILL
stopasgroup=true
killasgroup=true

Start Supervisor:

supervisord -c /etc/supervisord.conf

Start a Kafka consumer and append a log record to the application server's log file; the consumer will then receive the real-time behavior data:

# Start a Kafka console consumer
/root/bigdata/kafka/bin/kafka-console-consumer.sh --bootstrap-server 192.168.19.137:9092 --topic click-trace

# Append a log record
echo {\"actionTime\":\"2019-04-10 21:04:39\",\"readTime\":\"\",\"channelId\":18,\"param\":{\"action\": \"click\", \"userId\": \"2\", \"articleId\": \"14299\", \"algorithmCombine\": \"C2\"}} >> userClick.log

# The consumer receives the record
{"actionTime":"2019-04-10 21:04:39","readTime":"","channelId":18,"param":{"action": "click", "userId": "2", "articleId": "14299", "algorithmCombine": "C2"}}

Commonly used Supervisor commands:

supervisorctl

> status              # show program status
> start apscheduler   # start a single program (apscheduler)
> stop toutiao:*      # stop the toutiao program group
> start toutiao:*     # start the toutiao program group
> restart toutiao:*   # restart the toutiao program group
> update              # restart programs whose configuration has changed

Original article (republished with permission): https://www.jianshu.com/u/ac833cc5146e
