From: Tom R. <ro...@gm...> - 2014-06-28 05:43:41

Here are some notes I wrote on lockstep hashlife about four years ago.

---------- Forwarded message ----------
From: Tom Rokicki <ro...@gm...>
Date: Wed, Mar 10, 2010 at 11:18 AM
Subject: Re: More thoughts on Parallel Hashlife
To: Tony Finch <do...@do...>

Wow, awesome. I had heard about that but it's good to see the paper! This
is so fun.

Okay, let's talk parallel hashlife. I call the idea below "lockstep
hashlife". I've seen your thoughts, and I've been thinking about it myself,
but have never been satisfied. My earlier concerns about concurrent
mutation of a hashtable have been more or less negated by some reading I've
been doing on parallel hashtables, including the Cliff Click work, and the
work by Doug Lea on the Java concurrent hashtable implementation.

But let's simplify things even further. The idea that I think will enable
us to actually fly, and get real speedups with this fine-grained
parallelism, is something I call lock-step hashlife. Instead of jumping all
over on the time axis, let's do hashlife one time step at a time (however
big that timestep should be). So instead of

   gen(node) = gen(node->nw), gen(node->sw), gen(...)

which does node-at-a-time generation, let's do

   gen(vector<node*> nodes) =
      <base case here>
      vector<node*> nextlev = find-all-subnodes-needed-to-eval *nodes
      if (nextlev.size() > 0)
         gen(nextlev)
      vector<node*> nextlev2 = find-all-subnodes-again
      if (nextlev2.size() > 0)
         gen(nextlev2)
      cons-and-set-results(nodes)

so essentially, we always work with a big parallel vector of nodes. We
"excavate" the nodes in strict time order---the vector<node*> nodes at any
level are always from the same time instant. (We might not instantiate the
actual vector; we might just mark the nodes in the hashtable in some magic
way and then walk the whole hash.)

We need to allocate temporary storage for each node in nodes, to hold the
intermediate subnodes that are searched and consed. That and the vectors
may cause us to exhaust memory at inopportune times, but I think we can
figure out a way to deal with this.

One nice thing about this is that it permits us to *visualize* what is
going on in a nice clean way---we never jump *backwards* in time like we do
in "normal" hashlife. Instead, we are always doing a *step*. The step may
or may not recurse into deeper steps, but as each call completes, the time
counter moves inexorably forward. And we can compute and record perhaps
useful statistics on how many nodes we needed to recompute, and the like.

Once we've got this working sequentially, I think it can be parallelized
without too much effort; essentially the work on the "vectors" is split up
among threads.

A slightly fuller outline follows. Thoughts?

-tom

   void enq(vector<node*> &nextlev, node *n) {
      if (n->state == 0) {
         n->state = PENDING ;
         nextlev.push_back(n) ;
      }
   }
   void gen(vector<node*> nodes, int lev) {
      if (lev == basecase) {
         for (int i=0; i<nodes.size(); i++)
            nodes[i]->leafcase() ;
         return ;
      }
      vector<node*> nextlev ;
      for (int i=0; i<nodes.size(); i++) {
         node *n = nodes[i] ;
         node *work = getnode() ;
         n->res = work ;
         enq(nextlev, n->nw) ;
         enq(nextlev, work->nw = find_node(n->nw->ne, n->ne->nw, n->nw->se, n->ne->sw)) ;
         enq(nextlev, n->ne) ;
         enq(nextlev, work->ne = find_node(n->nw->sw, n->nw->se, n->sw->nw, n->sw->ne)) ;
         enq(nextlev, work->res = find_node(n->nw->se, n->ne->sw, n->sw->ne, n->se->nw)) ;
         enq(nextlev, work->sw = find_node(n->ne->sw, n->ne->se, n->se->nw, n->se->ne)) ;
         enq(nextlev, n->sw) ;
         enq(nextlev, work->se = find_node(n->sw->ne, n->se->nw, n->sw->se, n->se->sw)) ;
         enq(nextlev, n->se) ;
      }
      if (nextlev.size() > 0)
         gen(nextlev, lev-1) ;
      nextlev.clear() ;
      node *t1, *t2, *t3, *t4 ;
      for (int i=0; i<nodes.size(); i++) {
         node *n = nodes[i] ;
         node *work = n->res ;
         enq(nextlev, t1 = find_node(n->nw->res, work->nw->res, work->ne->res, work->res->res)) ;
         enq(nextlev, t2 = find_node(work->nw->res, n->ne->res, work->res->res, work->sw->res)) ;
         enq(nextlev, t3 = find_node(work->ne->res, work->res->res, n->sw->res, work->se->res)) ;
         enq(nextlev, t4 = find_node(work->res->res, work->sw->res, work->se->res, n->se->res)) ;
         work->nw = t1 ;
         work->ne = t2 ;
         work->sw = t3 ;
         work->se = t4 ;
      }
      if (nextlev.size() > 0)
         gen(nextlev, lev-1) ;
      for (int i=0; i<nodes.size(); i++) {
         node *n = nodes[i] ;
         node *work = n->res ;
         n->res = find_node(work->nw->res, work->ne->res, work->sw->res, work->se->res) ;
         freenode(work) ;
         n->state = DONE ;
      }
   }
From: Tom R. <ro...@gm...> - 2014-06-28 05:41:17

On multicore hashlife---it's something that's been thought about pretty
extensively, but I've never written any actual code. The lock-free
hashtables sound promising indeed, but they are pretty complicated, and
it's not clear how good performance would actually be. Also, it's not just
the hashtable that needs locking, but also the concept of a work item---you
don't want multiple threads expanding the same node at the same time, but
if another thread is expanding a node, you may need to wait for its result.

I've written about this pretty extensively and I think I've outlined a
reasonable solution (called lock-step hashlife) but, as I said, have not
actually implemented anything yet. It may suffer from an Amdahl-like effect
where certain sequential actions dominate runtime.

My implementation of hashlife in Golly does delete nodes, but only by
stopping the world and doing a mark/sweep garbage collection. I've actually
experimented with both generational and incremental garbage collection, but
never saw enough of a speedup to commit it to the main codeline. If you
multithread the generation but do not multithread the mark/sweep GC, you'll
find GC dominating pretty quickly, so that's another issue.

But this is a fascinating topic; hashlife as an algorithm is pretty simple,
and you'd expect there'd be a reasonable way to multithread it. But no one
has actually implemented anything yet. I would probably not start with
Golly for this, but instead start with the older hlife I wrote that is
still available on my site, since it is much simpler, without a lot of the
complexity that a UI imposes. If that could be made to work, retrofitting
it onto Golly would be straightforward. For multicore, I think the most
straightforward approach is to give up some of Golly's stinginess on memory
(using bits in address fields to mark certain things, etc.) to simplify
things.

But it's a fascinating topic and I'm sure someone will come up with a
reasonable solution at some point.

-tom

On Fri, Jun 27, 2014 at 9:26 PM, Andrew Trevorrow <an...@tr...> wrote:
> Paul:
>
>> I think Hashlife never deletes entries. ...
>
> Tom would be able to give you a more accurate answer, but my understanding
> is that Tom's implementation of Hashlife does delete entries. At least,
> that's what I assume it's doing when garbage collecting.
>
>> Feature request: could the Help|About... say whether this is the 32- or
>> 64-bit version?
>
> Good idea, but showing it in the About box could be tricky, as all
> we do at the moment is display the static info in Help/about.html.
> Would it be ok to display the info in the status line message
> at start up? Eg: "This is Golly version 2.7 (64-bit). ..."
>
> If you'd rather see the info at any time after starting up then the
> About box would be the best place. We could either:
>
> 1. Remove about.html and generate the html data dynamically.
> 2. Load about-32bit.html or about-64bit.html.
>
> I probably prefer option 1, as we could then show other potentially
> useful info such as the wxWidgets version used to build Golly.
>
> Any other suggestions?
>
> Andrew

--
-- http://cube20.org/
-- http://golly.sf.net/
From: Andrew T. <an...@tr...> - 2014-06-28 04:58:33

Paul:

> I think Hashlife never deletes entries. ...

Tom would be able to give you a more accurate answer, but my understanding
is that Tom's implementation of Hashlife does delete entries. At least,
that's what I assume it's doing when garbage collecting.

> Feature request: could the Help|About... say whether this is the 32- or
> 64-bit version?

Good idea, but showing it in the About box could be tricky, as all we do at
the moment is display the static info in Help/about.html. Would it be ok to
display the info in the status line message at start up? Eg: "This is Golly
version 2.7 (64-bit). ..."

If you'd rather see the info at any time after starting up then the About
box would be the best place. We could either:

1. Remove about.html and generate the html data dynamically.
2. Load about-32bit.html or about-64bit.html.

I probably prefer option 1, as we could then show other potentially useful
info such as the wxWidgets version used to build Golly.

Any other suggestions?

Andrew
From: Paul C. <pa...@ig...> - 2014-06-26 01:59:12

Hi folks. I recently got a new PC (after 10 years with the old one). Four
cores. (But sadly only 8GB for now, so no use for 64-bit Golly yet.)

Golly uses a maximum of 25% of the available CPU. Of course. I started
thinking about multithreaded Golly, especially Hashlife, searched for
<multithreaded hash table>, and found this:

http://preshing.com/20130605/the-worlds-simplest-lock-free-hash-table/

I think Hashlife never deletes entries. If that's so, this is perhaps a
start.

Feature request: could the Help|About... say whether this is the 32- or
64-bit version?

Cheers,
Paul