00:41:59 | * | q66 quit (Quit: Leaving) |
02:50:19 | * | Associat0r joined #nimrod |
02:50:19 | * | Associat0r quit (Changing host) |
02:50:19 | * | Associat0r joined #nimrod |
04:16:01 | * | OrionPK quit (Quit: Leaving) |
04:38:03 | * | Endeg joined #nimrod |
04:46:53 | * | EXetoC joined #nimrod |
07:02:16 | * | Araq_ joined #nimrod |
07:35:23 | * | Araq_ quit (Quit: ChatZilla 0.9.90.1 [Firefox 22.0/20130618035212]) |
07:57:43 | * | zahary quit (Read error: Operation timed out) |
08:57:44 | * | Associat0r quit (Quit: Associat0r) |
09:00:15 | * | Araq_ joined #nimrod |
09:09:32 | * | Araq_ quit (Remote host closed the connection) |
09:26:12 | * | q66 joined #nimrod |
09:50:07 | zahary_ | Araq, FYI: http://nwcpp.org/talks/2009/Ownership_Systems_against_Data_Races.pdf |
09:50:47 | zahary_ | from here: http://bartoszmilewski.com/2009/09/22/ownership-systems-against-data-races/ |
10:16:27 | * | Araq_ joined #nimrod |
10:30:20 | * | BitPuffin joined #nimrod |
10:35:10 | * | BitPuffin quit (Read error: Connection reset by peer) |
10:39:52 | * | Associat0r joined #nimrod |
10:44:40 | * | zahary__ joined #nimrod |
10:47:26 | * | zahary_ quit (Ping timeout: 240 seconds) |
10:53:02 | * | BitPuffin joined #nimrod |
11:09:02 | * | Trix[a]r_za quit (Ping timeout: 240 seconds) |
11:09:17 | * | Trix[a]r_za joined #nimrod |
11:55:43 | * | EXetoC quit (Quit: WeeChat 0.4.1) |
12:12:12 | * | EXetoC joined #nimrod |
13:57:42 | * | jbe_ joined #nimrod |
13:59:44 | * | Araq_ quit (Quit: ChatZilla 0.9.90.1 [Firefox 22.0/20130618035212]) |
14:01:50 | * | Endeg quit (Read error: Connection reset by peer) |
14:10:22 | * | BitPuffin_ joined #nimrod |
14:10:45 | * | reactormonk quit (Ping timeout: 240 seconds) |
14:10:46 | * | BitPuffin quit (Remote host closed the connection) |
14:11:01 | * | reactormonk joined #nimrod |
14:12:46 | jbe_ | does someone know why i get a segfault when i use the queue implementation and call add on a field, like: add(obj.queue, item)? |
14:13:08 | dom96 | did you initialise the queue? |
14:13:15 | jbe_ | yup |
14:13:21 | jbe_ | sample: https://gist.github.com/jbe/6207385 |
14:16:15 | dom96 | hrm, looks like a bug |
14:16:47 | jbe_ | i thought so too |
14:17:08 | jbe_ | should i add it to the tracker? |
14:18:18 | dom96 | sure |
14:23:05 | dom96 | it's odd, if you then do 'w.names = initQueue[string]()' it works |
14:31:23 | jbe_ | yeah, i added an issue |
14:31:35 | dom96 | thanks |
14:31:57 | jbe_ | nimrod is completely awesome by the way |
14:32:22 | dom96 | nice to hear you say that :) |
14:32:30 | dom96 | spread the word if you can :D |
14:33:27 | jbe_ | am already doing.. and i will make a little shrine for araq on my balcony or something |
14:33:35 | dom96 | hehe |
14:53:43 | * | shevy quit (Quit: "") |
17:44:09 | * | Mat2 joined #nimrod |
17:44:13 | Mat2 | hello |
17:45:50 | Araq | hi Mat2 |
17:50:16 | Araq | Mat2: you can avoid a jump table by doing 'goto opcode*256'; you need to ensure every handler doesn't span more than 256 bytes then and is aligned properly |
17:50:45 | Araq | ever tried this approach? it's impossible to do in C I think |
17:52:03 | Araq | this way you can keep the opcode in a few bits of the instruction and don't waste a full word for full threading |
17:54:32 | Mat2 | yes, I have done that before for a small VM written in assembler |
17:55:25 | Mat2 | with some assembler hacking, it's also possible in C (but of course not in a platform-independent way) |
17:56:39 | Araq | was it worth it? |
17:56:47 | Mat2 | for IA32 no |
17:58:07 | Araq | did you compare against a jump table or against full threading? |
17:58:27 | Mat2 | I compared it against direct threading |
18:00:39 | Mat2 | it is only effective if you precalculate the branch target in the handler routines, and the BTB gets out of sync the same way as for DTC (ca. 50% mispredictions) |
18:02:35 | Mat2 | because the only way on IA32 is either self-modifying code (with disastrous results because of the out-of-order pipelined design) or branching through an indirect register |
18:03:10 | Mat2 | e.g. jmp <reg> |
18:03:21 | Araq | yeah but the idea is to minimize cache pressure |
18:03:52 | Araq | avoiding the jump table so that the cache can hold other things |
18:04:41 | Mat2 | ok, but BTB mispredictions cost many more clock cycles (I have counted more than 100 in some measurements, mean 25-30) |
18:07:20 | Mat2 | that depends on the number of pipeline stages |
18:10:34 | Mat2 | and caches are rather large (for Intel CPUs) |
18:19:10 | Mat2 | anyhow, I do not know what kind of large memory allocations require cache-preloading besides the VM |
18:21:57 | * | BitPuffin_ quit (Ping timeout: 248 seconds) |
18:46:45 | * | BitPuffin joined #nimrod |
18:48:37 | * | Associat0r quit (Quit: Associat0r) |
19:19:24 | * | XAMPP-8 joined #nimrod |
19:25:31 | * | XAMPP_8 joined #nimrod |
19:28:45 | * | XAMPP-8 quit (Ping timeout: 276 seconds) |
19:36:38 | * | XAMPP_8 quit (Ping timeout: 240 seconds) |
19:46:57 | * | BitPuffin quit (Ping timeout: 276 seconds) |
19:52:10 | * | Mat2 quit (Quit: Leaving) |
19:55:11 | * | Mat2 joined #nimrod |
21:00:26 | dom96 | So I just submitted this: https://github.com/logicchains/Levgen-Parallel-Benchmarks/pull/2 |
21:00:47 | dom96 | Hopefully soon we will see a blog post which shows Nimrod coming out on top in terms of speed and sexiness :D |
21:02:21 | Araq | sure ... |
21:02:31 | Araq | we've seen lots of these already |
21:33:53 | EXetoC | :> |
21:47:36 | * | EXetoC quit (Quit: WeeChat 0.4.1) |
21:54:13 | Mat2 | get some sleep, ciao |
21:54:23 | * | Mat2 quit (Quit: Leaving) |
21:57:52 | * | BitPuffin joined #nimrod |
22:05:46 | * | OrionPK joined #nimrod |
22:07:25 | * | OrionPK quit (Read error: Connection reset by peer) |
22:07:51 | * | OrionPK joined #nimrod |
22:12:03 | reactormonk | dom96, happy? |
22:17:13 | dom96 | reactormonk: hrm? |
22:20:57 | * | XAMPP-8 joined #nimrod |
22:30:42 | reactormonk | Looks like I need to recreate the PR |
22:36:38 | * | XAMPP-8 quit (Ping timeout: 240 seconds) |
22:50:10 | dom96 | indeed |
23:51:02 | * | BitPuffin quit (Ping timeout: 240 seconds) |