
An online query was taking more than 1s and needed to be optimized. To speed it up, we built a materialized view on top of the base table:

CREATE MATERIALIZED VIEW dwst.tt
(
    `sn_sort_key` UInt8,
    `id` UInt64,
    `type` UInt8,
    `is_profit_avg` UInt8,
    `bd1` UInt64,
    `bd2` UInt64
)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/{ck_cluster}/dwst/tt', '{replica}')
PARTITION BY sn_sort_key
ORDER BY (id, type, bd1, bd2)
SETTINGS index_granularity = 8192
AS
SELECT
    halfMD5(id) % 64 AS sn_sort_key,
    id,
    type,
    multiIf((sum(v1) - sum(v2)) < 0, 2, 1) AS is_profit_avg,
    bd1,
    bd2
FROM dwst.base_detail
WHERE date <= (today() - 10)
GROUP BY sn_sort_key, id, type, bd1, bd2

Some details have been removed for confidentiality. The gist is that, on top of the base table dwst.base_detail, we aggregate by id, type, bd1 and bd2 to compute whether each group was profitable as of t-10. Because the base table is large, we wanted the materialized view to pre-compute this data and cut down query time.
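For context, the kind of lookup the view is meant to serve would look roughly like the following (the literal id value is just an example); instead of aggregating dwst.base_detail at query time, it reads the pre-computed flag from the much smaller dwst.tt:

SELECT id, type, bd1, bd2, is_profit_avg
FROM dwst.tt
WHERE sn_sort_key = halfMD5(123456) % 64
  AND id = 123456
  AND type = 1;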

In practice we ran into two problems, and came away with a deeper understanding of ClickHouse materialized views.

Problem 1: the total row count in the view is different every time

Locally we used INSERT ... SELECT ... FROM remote(...) statements to simulate inserts into the source table, which is what triggers the materialized view's computation. But after the base table was loaded, the total number of aggregated rows came out different on every run. The ClickHouse documentation on replication explains why:
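The simulated load was along these lines (the host and the date filter are placeholders, not the statement actually used):

INSERT INTO dwst.base_detail
SELECT *
FROM remote('source-host:9000', dwst.base_detail)
WHERE date = today() - 10;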

Data blocks are deduplicated. For multiple writes of the same data block (data blocks of the same size containing the same rows in the same order), the block is only written once.

The reason for this is in case of network failures when the client application doesn’t know if the data was written to the DB, so the INSERT query can simply be repeated.

It doesn’t matter which replica INSERTs were sent to with identical data.

INSERTs are idempotent. Deduplication parameters are controlled by merge_tree server settings.

Roughly: ClickHouse INSERT statements are idempotent. When the same data block is written again, for example because a network failure left the client application unsure whether the data had already been written and it retried, the duplicate block is only written once.
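If the base table is itself a Replicated* table, the effect is easy to reproduce: re-sending the exact same block is silently dropped, while the same rows batched in a different order form a new block and are written again. A toy sketch (the column values are made up):

-- The second statement is an identical block and is deduplicated away
INSERT INTO dwst.base_detail (id, type, v1, v2, bd1, bd2, date) VALUES (1, 1, 10, 5, 100, 200, today());
INSERT INTO dwst.base_detail (id, type, v1, v2, bd1, bd2, date) VALUES (1, 1, 10, 5, 100, 200, today());

-- The same rows in a different order are a different block and are inserted again
INSERT INTO dwst.base_detail (id, type, v1, v2, bd1, bd2, date) VALUES (2, 1, 10, 5, 100, 200, today()), (3, 1, 20, 5, 100, 200, today());
INSERT INTO dwst.base_detail (id, type, v1, v2, bd1, bd2, date) VALUES (3, 1, 20, 5, 100, 200, today()), (2, 1, 10, 5, 100, 200, today());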

So the suggested workarounds are:

* Post-process the duplicated data with a subquery to deduplicate it (the officially recommended approach)

* Use the ReplicatedReplacingMergeTree engine to deduplicate the data. This is the approach I went with in practice: after each insert, run OPTIMIZE TABLE to force the deduplication merge (a sketch follows below this list).
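A minimal sketch of that approach, assuming the same columns and sorting key as the view above. With ReplacingMergeTree the ORDER BY key (id, type, bd1, bd2) acts as the deduplication key, and OPTIMIZE ... FINAL forces the merge instead of waiting for background merges:

CREATE MATERIALIZED VIEW dwst.tt
(
    `sn_sort_key` UInt8,
    `id` UInt64,
    `type` UInt8,
    `is_profit_avg` UInt8,
    `bd1` UInt64,
    `bd2` UInt64
)
ENGINE = ReplicatedReplacingMergeTree('/clickhouse/tables/{ck_cluster}/dwst/tt', '{replica}')
PARTITION BY sn_sort_key
ORDER BY (id, type, bd1, bd2)
AS
SELECT
    halfMD5(id) % 64 AS sn_sort_key,
    id,
    type,
    multiIf((sum(v1) - sum(v2)) < 0, 2, 1) AS is_profit_avg,
    bd1,
    bd2
FROM dwst.base_detail
WHERE date <= (today() - 10)
GROUP BY sn_sort_key, id, type, bd1, bd2;

-- After each load, force a merge so rows with the same sorting key collapse immediately.
-- Depending on the ClickHouse version, this may need to target the view's inner storage table instead.
OPTIMIZE TABLE dwst.tt FINAL;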

Problem 2: the number of profitable rows is different every time

After deduplicating the data, the total row count was now correct, but the totals for is_profit_avg still came out different on each run, which was rather frustrating. Searching the official documentation turned up the following:

A materialized view is implemented as follows: when inserting data to the table specified in `SELECT`, part of the inserted data is converted by this `SELECT` query, and the result is inserted in the view.

Important

Materialized views in ClickHouse are implemented more like insert triggers. If there’s some aggregation in the view query, it’s applied only to the batch of freshly inserted data.

Any changes to existing data of source table (like update, delete, drop partition, etc.) doesn’t change the materialized view.

In short: a materialized view is essentially an insert trigger. If the view query contains any aggregation, it is applied only to the batch of freshly inserted data; other changes to the source table, such as updates, deletes or dropped partitions, do not change the materialized view.

A `SELECT` query can contain `DISTINCT`, `GROUP BY`, `ORDER BY`, `LIMIT`… Note that the corresponding conversions are performed independently on each block of inserted data.

For example, if `GROUP BY` is set, data is aggregated during insertion, but only within a single packet of inserted data.

The data won’t be further aggregated. The exception is when using an `ENGINE` that independently performs data aggregation, such as `SummingMergeTree`.

The SELECT query can contain DISTINCT, GROUP BY, ORDER BY, LIMIT and so on, but note that these conversions are applied independently to each block of inserted data. For example, with GROUP BY the data is only aggregated within a single inserted batch; it is never re-aggregated against the data that is already in the view, unless the target engine (such as SummingMergeTree) does its own aggregation.
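A toy example makes the behaviour obvious (the names here are illustrative, not from the original schema): two separate INSERTs carrying the same key produce two rows in the view, because each INSERT is aggregated on its own and nothing ever merges them afterwards.

CREATE DATABASE IF NOT EXISTS test;

CREATE TABLE test.src (k UInt32, v Int64) ENGINE = MergeTree ORDER BY k;

CREATE MATERIALIZED VIEW test.mv
ENGINE = MergeTree ORDER BY k
AS SELECT k, sum(v) AS total FROM test.src GROUP BY k;

INSERT INTO test.src VALUES (1, 10);
INSERT INTO test.src VALUES (1, 5);

-- Returns two rows for k = 1, (1, 10) and (1, 5), not a single row (1, 15):
-- the GROUP BY ran once per inserted block and is never re-applied across blocks.
SELECT * FROM test.mv;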

Summary

In our case, the profit flag produced after the GROUP BY is a difference computed over the full history of each key, so every historical row has to take part in the calculation. That does not fit the materialized-view model: a materialized view is essentially stream processing, where each newly inserted row carries a value that can be accumulated incrementally, not a value that comes out of batch processing of the whole offline data set. So for this optimization we dropped the materialized view and went with an intermediate table that is recomputed once a day, as sketched below.
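What we ended up with is, roughly, a plain pre-aggregated table plus a daily job that rebuilds it; the table name and the full-rebuild strategy below are illustrative:

CREATE TABLE dwst.tt_daily
(
    `sn_sort_key` UInt8,
    `id` UInt64,
    `type` UInt8,
    `is_profit_avg` UInt8,
    `bd1` UInt64,
    `bd2` UInt64
)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/{ck_cluster}/dwst/tt_daily', '{replica}')
PARTITION BY sn_sort_key
ORDER BY (id, type, bd1, bd2);

-- Scheduled once a day (e.g. via cron): recompute the aggregate over the full history
TRUNCATE TABLE dwst.tt_daily;
INSERT INTO dwst.tt_daily
SELECT
    halfMD5(id) % 64 AS sn_sort_key,
    id,
    type,
    multiIf((sum(v1) - sum(v2)) < 0, 2, 1) AS is_profit_avg,
    bd1,
    bd2
FROM dwst.base_detail
WHERE date <= (today() - 10)
GROUP BY sn_sort_key, id, type, bd1, bd2;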

References