
Big Data Counting: How to Count a Billion Distinct Objects Using Only 1.5 KB of Memory


Big Data Counting: How To Count A Billion Distinct Objects Using Only 1.5K

This is a guest post by Matt Abrams (@abramsm), from Clearspring, discussing how they are able to accurately estimate the cardinality of sets with billions of distinct elements using surprisingly small data structures. Their servers receive well over 100 billion events per month.

At Clearspring, we like to count things. Counting the number of distinct elements (the cardinality) of a set is a challenge when the set is very large.

To better understand the challenge of determining the cardinality of a large data set, imagine that your log files contain 16-character IDs and you want to count the number of distinct IDs. For example:

4f67bfc603106cb2

These 16 characters take 128 bits to represent, so 65,000 IDs need about 1 MB of space. We receive more than 3 billion event records per day, and each record carries an ID. Those IDs alone require 384 billion bits, or 45 GB of storage, and that is just the space for the ID field. A simple way to get the cardinality of the IDs in a day's events is to use an in-memory hash set that holds the list of unique IDs (the input file may contain many records with the same ID, but the hash set stores it only once). Even if we assume that only 1 in 3 record IDs is unique (i.e. 2 out of 3 are duplicates), the hash set would still need 119 GB of RAM, not counting the overhead Java adds for storing objects in memory. You would need a machine with several hundred gigabytes of memory to count distinct elements this way, and that covers only a single day's worth of unique IDs. The problem only gets harder if we want to count weeks or months of data. We certainly do not have a machine with several hundred gigabytes of spare memory sitting around, so we needed a better solution.
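For reference, the naive approach is only a few lines of code; the problem is purely the memory it consumes. A minimal sketch (the file name events.log and one-ID-per-line format are placeholder assumptions, not from the original article):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.HashSet;
import java.util.Set;

public class NaiveDistinctCount {
    public static void main(String[] args) throws IOException {
        // Every unique ID is kept in memory, which is exactly what
        // becomes infeasible at billions of distinct elements.
        Set<String> seen = new HashSet<>();
        try (BufferedReader in = Files.newBufferedReader(Paths.get("events.log"))) {
            String id;
            while ((id = in.readLine()) != null) {
                seen.add(id);   // duplicate IDs are ignored by the set
            }
        }
        System.out.println("distinct IDs: " + seen.size());
    }
}
```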

A common approach to this problem is to use a bitmap (see the blog post 海量数据处理算法—Bit-Map). A bitmap can quickly and accurately produce the cardinality of a given input. The basic idea is to use a hash function to map the data set onto a bit field, with each input element mapping uniquely to one bit. This produces no collisions and reduces the space needed to count each element to a single bit. Although bitmaps save storage dramatically, they are still problematic when the cardinality is very high or when there are very many different sets to count. For example, to count to one billion with a bitmap you need one billion bits, or roughly 120 MB per counter. Sparse bitmaps can be compressed for better space efficiency, but that does not always help.
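As a sketch of the bitmap idea, assume for a moment that each element can be mapped injectively to an integer index below a known bound (this assumption is what keeps the count exact; with a lossy hash you are back to an estimate):

```java
import java.util.BitSet;

public class BitmapCounter {
    private final BitSet bits;

    // maxId is the size of the ID space: one bit per possible ID
    public BitmapCounter(int maxId) {
        this.bits = new BitSet(maxId);
    }

    // index must be a collision-free mapping of the element, e.g. a numeric ID
    public void add(int index) {
        bits.set(index);
    }

    public int cardinality() {
        return bits.cardinality();   // number of bits set to 1
    }
}
```

One billion possible IDs means a BitSet of one billion bits, which is the roughly 120 MB per counter mentioned above.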

Luckily, cardinality estimation is a popular area of research. We have leveraged this research to provide an open-source implementation of cardinality estimators, set membership detection, and top-k algorithms.

Cardinality estimation algorithms trade accuracy for space. To illustrate this, we counted the number of distinct words in all of Shakespeare's works using three different counting techniques. Note that our input data set contains extra data, so the cardinality is higher than the standard reference answer to this question. The three techniques were a Java HashSet, a Linear Probabilistic Counter, and a Hyper LogLog Counter. The results:

Counter       Bytes Used    Count    Error
HashSet       10447016      67801    0%
Linear        3384          67080    1%
HyperLogLog   512           70002    3%

The table shows that we counted the words with only 512 bytes and an error within 3%. Compare that to the exact count from the HashMap, which needs nearly 10 MB of space, and it is easy to see why cardinality estimators are useful. In applications where accuracy is not paramount, which is true of most web-scale and network counting scenarios, a probabilistic counter brings enormous space savings.

Linear Probabilistic Counter

The linear probabilistic counter is space efficient and lets the implementer specify the desired level of accuracy. The algorithm is useful when space efficiency matters but you need to control the error in the result. It works in two steps. First, a bitmap initialized to all zeros is allocated in memory, and a hash function is applied to every entry in the input data; the hash maps each record (element) to one bit of the bitmap, and that bit is set to 1. Second, the algorithm counts the number of empty bits and plugs that number into the following formula to get the estimate:

n = -m ln(Vn)

Note: ln(Vn) = loge(Vn), the natural logarithm.

In the formula, m is the size of the bitmap and Vn is the ratio of empty bits to the size of the map. The important point is that the original bitmap can be much smaller than the expected maximum cardinality; how much smaller depends on how much error you can tolerate. Because the bitmap size m is smaller than the total number of distinct elements, there will be collisions. Those collisions are what save space, but they also introduce error into the estimate. So by controlling the size of the original map, we can estimate the number of collisions and therefore how much error we will see in the final result.
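A minimal Java sketch of a linear probabilistic counter along these lines (the modulo hash on hashCode and the use of String elements are illustrative choices, not the article's; a production version would use a stronger hash):

```java
import java.util.BitSet;

public class LinearCounter {
    private final BitSet map;
    private final int m;   // number of bits; may be much smaller than the expected cardinality

    public LinearCounter(int m) {
        this.m = m;
        this.map = new BitSet(m);
    }

    public void offer(String element) {
        // Map the element to one of the m bits; collisions are expected and accounted for.
        int bit = (element.hashCode() & 0x7fffffff) % m;
        map.set(bit);
    }

    public long cardinality() {
        double emptyBits = m - map.cardinality();
        double vn = emptyBits / m;             // ratio of empty bits to map size
        // n = -m * ln(Vn); if Vn reaches 0 the estimate diverges, meaning the map was too small
        return Math.round(-m * Math.log(vn));
    }
}
```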

Hyper LogLog

As the name suggests, the Hyper LogLog counter can estimate the cardinality of a set with cardinality up to Nmax using only loglog(Nmax) + O(1) bits. Like the linear counter, the Hyper LogLog counter lets the designer specify the desired accuracy; in Hyper LogLog's case this is done by choosing the desired relative standard deviation and the maximum cardinality you expect to count. Most counters work by taking an input data stream M and applying a hash function h(M) to it, which yields an observable S = h(M) over strings in {0,1}^∞. Hyper LogLog splits the hashed input stream into m substreams and maintains one observable per substream, which is in effect m small counters. Averaging these observables yields a counter whose accuracy improves as m grows, while still requiring only a constant number of operations per element of the input set. The result is that this counter can count one billion distinct elements with an accuracy of 2% using only 1.5 KB of space. Compared with the 120 megabytes required by the HashSet implementation, the efficiency of this algorithm is obvious.
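To make the "split the hash into m observables" idea concrete, here is a stripped-down sketch of the core Hyper LogLog mechanics. It is not the paper's full algorithm (small-range and large-range bias corrections are omitted), and the MD5-based 64-bit hash and one-byte-per-register layout are assumptions made for readability:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class SimpleHyperLogLog {
    private final int b;            // number of index bits; m = 2^b registers
    private final int m;
    private final byte[] registers;

    public SimpleHyperLogLog(int b) {
        this.b = b;
        this.m = 1 << b;
        this.registers = new byte[m];
    }

    public void offer(String element) {
        long x = hash64(element);
        int idx = (int) (x >>> (64 - b));               // first b bits pick the register
        long rest = x << b;                              // remaining bits form the observable
        int rank = Long.numberOfLeadingZeros(rest) + 1;  // position of the first 1-bit
        if (rank > registers[idx]) {
            registers[idx] = (byte) rank;                // keep the maximum rank seen
        }
    }

    public long cardinality() {
        // Raw estimate: alpha_m * m^2 / sum(2^-register), i.e. a normalized harmonic mean.
        double sum = 0.0;
        for (byte r : registers) {
            sum += Math.pow(2.0, -r);
        }
        double alphaM = 0.7213 / (1.0 + 1.079 / m);      // standard alpha_m approximation
        return Math.round(alphaM * m * m / sum);
    }

    private static long hash64(String s) {
        try {
            byte[] d = MessageDigest.getInstance("MD5")
                    .digest(s.getBytes(StandardCharsets.UTF_8));
            long h = 0;
            for (int i = 0; i < 8; i++) {
                h = (h << 8) | (d[i] & 0xffL);           // use the first 8 digest bytes
            }
            return h;
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }
}
```

With b = 11 (m = 2048 registers) the relative standard deviation is about 1.04/sqrt(2048), roughly 2%; packing each register into about 6 bits instead of a full byte is what gets you to the 1.5 KB figure, whereas this sketch spends a byte per register for clarity.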

Merging Distributed Counters

We have shown that with the counters described above we can estimate the cardinality of large sets. But what do you do if your raw input data does not fit on a single machine? That is exactly the problem we face at Clearspring: our data is spread across hundreds of servers, and each server holds only a partial subset of the total data set. This is why the ability to merge the contents of a set of distributed counters is crucial. The idea is a little mind-bending, but if you spend a moment thinking about it, it is not much different from basic cardinality estimation. Because these counters represent the cardinality as a set of bits in a map, we can take two compatible counters and merge their bits into a single map. The algorithms already handle collisions, so we can still get a cardinality estimate with the desired precision even though we never brought all of the input data onto one machine. This is tremendously useful and saves us a great deal of time and effort moving data around our network.
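A sketch of what the merge step could look like for the two toy counters above, assuming both sides were built with the same size and the same hash function (these helper names are mine, not part of the article or any library):

```java
import java.util.BitSet;

public class CounterMerge {
    // Merge two linear-counter bitmaps of identical configuration by OR-ing them:
    // a bit is set if either server saw an element mapping to it.
    public static BitSet mergeLinear(BitSet a, BitSet b) {
        BitSet merged = (BitSet) a.clone();
        merged.or(b);
        return merged;
    }

    // Merge two Hyper LogLog register arrays of identical configuration
    // by keeping the maximum observed rank per register.
    public static byte[] mergeHll(byte[] a, byte[] b) {
        byte[] merged = new byte[a.length];
        for (int i = 0; i < a.length; i++) {
            merged[i] = (byte) Math.max(a[i], b[i]);
        }
        return merged;
    }
}
```

Either way, the merged structure is exactly what a single counter would have produced had it seen all of the data, which is why the estimate keeps its precision.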

Next Steps

Hopefully this post has helped you better understand the concept and applications of probabilistic counters. If estimating the cardinality of large sets is a problem for you and you happen to use a JVM-based language, check out the stream-lib project; it provides implementations of the algorithms described above as well as several other stream-processing utilities.
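If you do reach for stream-lib, usage is meant to be only a few lines. A sketch along these lines, assuming the library exposes a HyperLogLog class with offer/cardinality methods in the com.clearspring.analytics.stream.cardinality package and a constructor taking the number of index bits; check the project README for the exact API in your version:

```java
import com.clearspring.analytics.stream.cardinality.HyperLogLog;

public class StreamLibExample {
    public static void main(String[] args) {
        // 14 index bits -> 2^14 registers; higher values trade memory for accuracy.
        HyperLogLog hll = new HyperLogLog(14);
        String[] ids = {"4f67bfc603106cb2", "4f67bfc603106cb2", "deadbeef00000000"};
        for (String id : ids) {
            hll.offer(id);   // offering the same ID twice does not change the estimate
        }
        System.out.println("estimated cardinality: " + hll.cardinality());
    }
}
```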

Source: High Scalability

Translator's note: my English is not great, so this translation may contain inaccuracies. But having read the CSDN translation, I doubt it was done by an engineer; parts of it read like raw machine translation.

For a deeper look at Hyper LogLog, see http://algo.inria.fr/flajolet/Publications/FlFuGaMe07.pdf and http://www.ic.unicamp.br/~celio/peer2peer/math/bitmap-algorithms/durand03loglog.pdf

At Clearspring we like to count things. Counting the number of distinct elements (the cardinality) of a set is a challenge when the cardinality of the set is large.

To better understand the challenge of determining the cardinality of large sets let’s imagine that you have a 16 character ID and you’d like to count the number of distinct IDs that you’ve seen in your logs. Here is an example:

4f67bfc603106cb2

These 16 characters represent 128 bits. 65K IDs would require 1 megabyte of space. We receive over 3 billion events per day, and each event has an ID. Those IDs require 384,000,000,000 bits or 45 gigabytes of storage. And that is just the space that the ID field requires! To get the cardinality of IDs in our daily events we could take a simplistic approach. The most straightforward idea is to use an in memory hash set that contains the unique list of IDs seen in the input files. Even if we assume that only 1 in 3 records are unique the hash set would still take 119 gigs of RAM, not including the overhead Java requires to store objects in memory. You would need a machine with several hundred gigs of memory to count distinct elements this way and that is only to count a single day’s worth of unique IDs. The problem only gets more difficult if we want to count weeks or months of data. We certainly don’t have a single machine with several hundred gigs of free memory sitting around so we needed a better solution.

One common approach to this problem is the use of bitmaps. Bitmaps can be used to quickly and accurately get the cardinality of a given input. The basic idea with a bitmap is mapping the input dataset to a bit field using a hash function where each input element uniquely maps to one of the bits in the field. This produces zero collisions, and reduces the space required to count each unique element to 1 bit. While bitmaps drastically reduce the space requirements from the naive set implementation described above they are still problematic when the cardinality is very high and/or you have a very large number of different sets to count. For example, if we want to count to one billion using a bitmap you will need one billion bits, or roughly 120 megabytes for each counter. Sparse bitmaps can be compressed in order to gain space efficiency, but that is not always helpful.

Luckily, cardinality estimation is a popular area of research. We’ve leveraged this research to provide an open source implementation of cardinality estimators, set membership detection, and top-k algorithms.

Cardinality estimation algorithms trade space for accuracy. To illustrate this point we counted the number of distinct words in all of Shakespeare’s works using three different counting techniques. Note that our input dataset has extra data in it so the cardinality is higher than the standard reference answer to this question. The three techniques we used were Java HashSet, Linear Probabilistic Counter, and a Hyper LogLog Counter. Here are the results:

Counter       Bytes Used    Count    Error
HashSet       10447016      67801    0%
Linear        3384          67080    1%
HyperLogLog   512           70002    3%

The table shows that we can count the words with a 3% error rate using only 512 bytes of space. Compare that to a perfect count using a HashMap that requires nearly 10 megabytes of space and you can easily see why cardinality estimators are useful. In applications where accuracy is not paramount, which is true for most web scale and network counting scenarios, using a probabilistic counter can result in tremendous space savings.

Linear Probabilistic Counter

The Linear Probabilistic Counter is space efficient and allows the implementer to specify the desired level of accuracy. This algorithm is useful when space efficiency is important but you need to be able to control the error in your results. This algorithm works in a two-step process. The first step allocates a bitmap in memory initialized to all zeros. A hash function is then applied to each entry in the input data. The result of the hash function maps the entry to a bit in the bitmap, and that bit is set to 1. In the second step the algorithm counts the number of empty bits and uses that number as input to the following equation to get the estimate.

n = -m ln(Vn)

In the equation m is the size of the bitmap and Vn is the ratio of empty bits over the size of the map. The important thing to note is that the size of the original bitmap can be much smaller than the expected max cardinality. How much smaller depends on how much error you can tolerate in the result. Because the size of the bitmap, m, is smaller than the total number of distinct elements, there will be collisions. These collisions are required to be space-efficient but also result in the error found in the estimation. So by controlling the size of the original map we can estimate the number of collisions and therefore the amount of error we will see in the end result.

Hyper LogLog

The Hyper LogLog Counter’s name is self-descriptive. The name comes from the fact that you can estimate the cardinality of a set with cardinality Nmax using just loglog(Nmax) + O(1) bits. Like the Linear Counter the Hyper LogLog counter allows the designer to specify the desired accuracy tolerances. In Hyper LogLog’s case this is done by defining the desired relative standard deviation and the max cardinality you expect to count. Most counters work by taking an input data stream, M, and applying a hash function to that set, h(M). This yields an observable result of S = h(M) of {0,1}^∞ strings. Hyper LogLog extends this concept by splitting the hashed input stream into m substrings and then maintains m observables for each of the substreams. Taking the average of the additional observables yields a counter whose accuracy improves as m grows in size but only requires a constant number of operations to be performed on each element of the input set. The result is that, according to the authors of this paper, this counter can count one billion distinct items with an accuracy of 2% using only 1.5 kilobytes of space. Compare that to the 120 megabytes required by the HashSet implementation and the efficiency of this algorithm becomes obvious.

Merging Distributed Counters

We’ve shown that using the counters described above we can estimate the cardinality of large sets. However, what can you do if your raw input dataset does not fit on a single machine? This is exactly the problem we face at Clearspring. Our data is spread out over hundreds of servers and each server contains only a partial subset of the total dataset. This is where the fact that we can merge the contents of a set of distributed counters is crucial. The idea is a little mind-bending but if you take a moment to think about it the concept is not that much different than basic cardinality estimation. Because the counters represent the cardinality as a set of bits in a map we can take two compatible counters and merge their bits into a single map. The algorithms already handle collisions so we can still get a cardinality estimation with the desired precision even though we never brought all of the input data to a single machine. This is terribly useful and saves us a lot of time and effort moving data around our network.

Next Steps

Hopefully this post has helped you better understand the concept and application of probabilistic counters. If estimating the cardinality of large sets is a problem and you happen to use a JVM based language then you should check out the stream-lib project — it provides implementations of the algorithms described above as well as several other stream-processing utilities.

Reposted from: http://blog.csdn.net/hguisu/article/details/8433731
