00:00:39 | dom96 | I did. |
00:00:44 | dom96 | That's how I discovered Nimrod :P |
00:01:37 | Jehan_ | How? |
00:01:49 | dom96 | Back then there was a wiki article about it |
00:01:59 | dom96 | The Python article linked to it |
00:03:01 | Jehan_ | Well, I'd still say that this is the exception rather than the rule. |
00:04:32 | dom96 | perhaps |
00:05:41 | Jehan_ | The real difficulty with gaining mindshare for a programming language is, I think, that (1) there are too many of them and (2) people are being taught the mainstream ones in school/college and settle for "good enough". |
00:07:03 | Jehan_ | An additional problem is that languages with small communities have difficulties building an ecosystem. Chicken and egg problem, really. |
00:10:11 | * | Jehan_ quit (Quit: Leaving) |
00:18:56 | * | ehaliewicz joined #nimrod |
00:21:31 | shevy | I don't even remember how I discovered nimrod but I think fowl mentioned it |
00:22:35 | dom96 | night |
00:27:58 | fowl | hi shevy |
00:30:23 | * | gsingh93_ quit (Quit: Bye) |
00:40:13 | * | shevy quit (Remote host closed the connection) |
00:48:38 | * | ehaliewicz quit (Ping timeout: 245 seconds) |
00:57:49 | Demos | we are slowly gaining mindshare, but we need to release and be patient |
01:06:39 | * | xenagi joined #nimrod |
01:08:39 | * | hoverbear joined #nimrod |
01:10:20 | * | hoverbear quit (Client Quit) |
01:13:16 | OrionPK | hola |
01:20:48 | * | q66 quit (Quit: Leaving) |
01:24:03 | * | flaviu1 quit (Remote host closed the connection) |
01:29:18 | EXetoC | 0xff'i8 is allowed. has it been reported? |
01:29:40 | fowl | why shouldnt it be |
01:34:09 | EXetoC | because 0xff > 0x7f, so it doesn't make much sense |
01:35:47 | * | hoverbear joined #nimrod |
01:36:00 | EXetoC | 0x7f being the upper range of signed bytes of course |
01:40:46 | Varriount | Eh, I've gone to wikipedia to discover programming languages... |
01:41:00 | hoverbear | Varriount? |
01:41:12 | Varriount | Yes? |
01:41:14 | EXetoC | I thought the checks were pretty precise at least for decimal literals, but I guess I'm wrong |
01:41:28 | EXetoC | assuming that it didn't break at some point |
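A minimal sketch of the literal behaviour under discussion, assuming a reasonably recent compiler (the exact rules have varied between releases): hex literals with a type suffix are taken as bit patterns, while out-of-range decimal literals are rejected.

    let a = 0xff'i8    # accepted: the hex literal is read as a bit pattern, so a == -1
    # let b = 255'i8   # rejected at compile time: 255 is outside int8's range (-128 .. 127)
    echo a             # -1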
01:42:34 | * | CARAM_ quit (Ping timeout: 240 seconds) |
01:42:52 | Varriount | hoverbear: Yes? |
01:43:09 | hoverbear | Varriount: Why are you looking? |
01:43:40 | Varriount | hoverbear: I'm not *currently* looking. I'm just responding to past comments (read the logs) |
01:43:47 | hoverbear | Varriount: Oh I see |
01:44:30 | Varriount | hoverbear: I'm quite satisfied with Nimrod, and have no plans to go elsewhere. |
01:44:33 | * | CARAM_ joined #nimrod |
01:44:42 | hoverbear | Varriount: A nice happy place. :) |
01:44:51 | Varriount | Hello CARAM_ |
01:48:00 | * | saml_ joined #nimrod |
02:08:41 | * | nequitans quit (Ping timeout: 264 seconds) |
02:09:51 | * | nequitans joined #nimrod |
03:08:18 | * | Springbok joined #nimrod |
03:15:53 | * | def- joined #nimrod |
03:18:46 | * | Joe_knock quit (Quit: Leaving) |
03:19:23 | * | def-_ quit (Ping timeout: 252 seconds) |
03:20:19 | CARAM_ | heya ;) |
03:57:41 | * | brson quit (Ping timeout: 255 seconds) |
04:12:19 | * | kemet joined #nimrod |
04:23:25 | * | bjz joined #nimrod |
04:25:56 | * | kemet quit (Quit: Instantbird 1.5 -- http://www.instantbird.com) |
04:32:40 | * | saml_ quit (Ping timeout: 260 seconds) |
04:36:48 | * | xenagi quit (Quit: Leaving) |
04:40:34 | * | Demos quit (Read error: Connection reset by peer) |
04:44:53 | * | brson joined #nimrod |
05:46:03 | * | hoverbea_ joined #nimrod |
05:47:15 | * | hoverbear quit (Ping timeout: 252 seconds) |
06:06:19 | * | CARAM_ quit (Ping timeout: 252 seconds) |
06:08:17 | * | CARAM_ joined #nimrod |
06:12:39 | * | bjz_ joined #nimrod |
06:14:31 | * | hoverbea_ quit () |
06:14:33 | * | bjz quit (Ping timeout: 240 seconds) |
06:14:43 | * | hoverbear joined #nimrod |
06:16:01 | * | hoverbear quit (Client Quit) |
06:26:46 | * | bjz_ quit (Read error: Connection reset by peer) |
06:29:37 | * | bjz joined #nimrod |
06:29:38 | * | brson quit (Ping timeout: 240 seconds) |
06:44:29 | * | lyro quit (Remote host closed the connection) |
07:02:39 | * | bjz quit (Read error: Connection reset by peer) |
07:04:03 | * | dymk quit (Ping timeout: 245 seconds) |
07:04:38 | * | bjz joined #nimrod |
07:05:00 | * | dymk joined #nimrod |
07:17:24 | * | bjz quit (Ping timeout: 260 seconds) |
07:17:34 | * | bjz joined #nimrod |
07:41:39 | * | io2 joined #nimrod |
08:11:19 | * | Changaco joined #nimrod |
08:11:38 | * | kemet joined #nimrod |
08:19:11 | * | kunev joined #nimrod |
09:53:02 | * | kemet quit (Remote host closed the connection) |
10:10:29 | * | Changaco quit (Ping timeout: 264 seconds) |
10:10:46 | * | Changaco joined #nimrod |
10:41:02 | * | kemet joined #nimrod |
10:50:05 | * | Changaco quit (Ping timeout: 264 seconds) |
10:57:06 | * | kemet quit (Quit: Instantbird 1.5 -- http://www.instantbird.com) |
11:21:00 | * | Changaco joined #nimrod |
12:07:06 | reactormonk | is a memmove more efficient than moving every byte manually? |
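For reference, a minimal sketch (not from the discussion) of both routes in Nimrod; system's moveMem/copyMem map onto the C routines and are usually faster than a hand-written byte loop for larger blocks.

    var src = "hello, world"
    var dst = newString(src.len)

    # block copy in one call via the C memmove machinery
    moveMem(addr dst[0], addr src[0], src.len)

    # the manual equivalent, one byte at a time
    for i in 0 ..< src.len:
      dst[i] = src[i]

    echo dst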
12:16:39 | * | untitaker quit (Ping timeout: 252 seconds) |
12:17:28 | reactormonk | no contains(string, char) in system? :-/ |
12:22:04 | * | untitaker joined #nimrod |
12:23:02 | reactormonk | I'm getting "Error: undeclared identifier: '[]'" for strings; it's defined somewhere in system, I assume? :-/ |
12:31:46 | * | Klaufir left #nimrod (#nimrod) |
12:36:04 | * | darkf quit (Quit: Leaving) |
12:58:19 | * | Springbok quit (Quit: Leaving) |
13:09:17 | * | Klaufir joined #nimrod |
13:21:59 | * | mal`` quit (Quit: ERC Version 5.3 (IRC client for Emacs)) |
13:26:11 | * | mal`` joined #nimrod |
14:08:15 | * | flaviu1 joined #nimrod |
15:03:53 | * | Changaco quit (Ping timeout: 264 seconds) |
15:05:02 | * | Changaco joined #nimrod |
15:30:21 | Klaufir | Given this: https://gist.github.com/klaufir/b2a8d13bf1f6f5296dc6 , how would it be possible to chain these, as in it.imap(...).ifilter(...) ? |
15:34:17 | flaviu1 | Klaufir: I don't see why not. Is there anything preventing that from working? |
15:34:31 | Klaufir | compiler error |
15:36:25 | EXetoC | flaviu1: simply because they don't yield iterators |
15:37:31 | Klaufir | EXetoC: how would it be possible to change this code to enable chaining? |
15:37:40 | EXetoC | I've been wanting to implement D-like ranges, but I'm waiting for some bug fixes |
15:41:17 | EXetoC | Klaufir: I'm not sure, but I don't think it's such a bad idea to have to convert an iterator to some container type first |
15:41:50 | flaviu1 | EXetoC: That's slow though |
15:42:00 | flaviu1 | especially if your iterator is very long |
15:42:02 | Klaufir | EXetoC: it makes these functions unusable when there is a large datastream |
15:42:02 | EXetoC | ok range type then |
15:42:33 | Klaufir | I find the syntax regarding iterators rather cumbersome |
15:42:51 | Klaufir | for example there is the need for var iterInst = iter |
15:43:04 | Klaufir | and when I see something like "var A = B" |
15:43:11 | Klaufir | I expect them to have the same type |
15:43:26 | Klaufir | but no, here it means instantiation |
15:46:00 | dom96 | you probably shouldn't be doing this using iterators but simply procs |
15:46:07 | dom96 | which generate a seq |
15:46:08 | * | kunev quit (Quit: leaving) |
15:46:17 | EXetoC | see the above remark |
15:46:33 | Klaufir | dom96: suppose we have a large datastream; then seqs will eat up memory |
15:46:55 | dom96 | then you should implement lazy sequences |
15:47:24 | dom96 | Closure iterators simply weren't designed for this. |
15:47:27 | flaviu1 | dom96: Iterators also help reduce overhead by combining multiple operations into one |
15:49:47 | flaviu1 | And if closure iterators aren't designed for this, they should be, since this is the biggest use case for them |
15:50:43 | EXetoC | is it better than having some range interface? |
15:50:54 | Klaufir | range interface would be great |
15:51:59 | dom96 | flaviu1: IMO lazy sequences are more appropriate. |
15:52:21 | EXetoC | that's a very specific type though, but maybe |
15:52:49 | Klaufir | dom96: is there a lazyseq implementation somewhere? |
15:53:02 | dom96 | Klaufir: Not as far as I know |
16:04:47 | Araq | Klaufir: they have the same type |
16:06:23 | Araq | btw usually 'map' is spelt 'for' and 'filter' is 'if' in Nimrod ... |
16:07:26 | Araq | yes, Nimrod is not Haskell, get over it |
16:07:29 | Klaufir | Araq: possibly I don't understand something clearly, when passing a closure iterator to imap ( https://gist.github.com/klaufir/b2a8d13bf1f6f5296dc6 ), without the var iterInst = iter it doesn't seem to work. Why is that, what am I missing here? |
16:09:09 | Araq | 1) it's an edge case that likely will be supported in the future |
16:09:48 | Araq | 2) indeed 'var a = b' is necessary to initialize the iteration process properly |
16:10:00 | Klaufir | I see, thanks |
16:11:09 | * | hoverbear joined #nimrod |
16:11:20 | * | hoverbear quit (Max SendQ exceeded) |
16:11:37 | Araq | but they do have the same type, there is no special typing rule in the compiler for 'var a = b' when b is a closure iterator |
16:11:47 | * | bjz quit (Ping timeout: 260 seconds) |
16:11:51 | * | hoverbear joined #nimrod |
16:12:04 | Klaufir | I see that now, it was a misunderstanding on my part |
16:13:00 | Araq | thinking about it ... I think it's a clear bug that 'for x in foo()' doesn't work when 'foo' is a parameter |
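A minimal sketch of the pattern under discussion (an assumption of roughly what the gist does, not the gist itself): imap takes a closure iterator and returns a new one, and the local 'var it = iter' copy is what drives the iteration inside it.

    proc imap[T, S](iter: iterator (): T {.closure.},
                    f: proc (x: T): S): iterator (): S =
      result = iterator (): S =
        var it = iter              # 'var a = b': same type as 'iter', sets up the iteration
        while true:
          let x = it()
          if finished(it): break   # the final call returns a default value; discard it
          yield f(x)

    proc counter(a, b: int): iterator (): int =
      result = iterator (): int =
        for i in a .. b:
          yield i

    let nums = counter(1, 5)
    let doubled = imap(nums, proc (x: int): int = x * 2)
    for x in doubled():
      echo x                       # prints 2, 4, 6, 8, 10 (one per line)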
16:13:52 | EXetoC | nimrod isn't haskell? good rationale, nice and short |
16:15:03 | Araq | it's also not C# for that matter |
16:15:47 | Araq | I know it's hard to take, but I created nimrod because there was no other language that supports the style of programming that I'm really happy with. |
16:17:20 | dom96 | There is no reason for Nimrod not to support other styles of programming |
16:18:00 | EXetoC | there are plenty of other higher-order functions |
16:18:24 | Araq | dom96: it's however a good reason to focus on other things, that I consider more important. |
16:18:47 | EXetoC | some of which are less trivial obviously, and I do like compact code. so did you last time I checked |
16:19:17 | flaviu1 | Hmm, it seems that certain type classes create stackoverflows while compiling |
16:19:53 | dom96 | Araq: For yourself perhaps. Others should be able to do whatever they please with their time. |
16:20:36 | Araq | dom96: ok so somebody else should fix the remaining closure iterator bugs... |
16:20:49 | flaviu1 | dom96: I don't think he'd reject patches, just that he doesn't have time to do it himself |
16:20:59 | dom96 | No. My point is Klaufir should write a lazy seq implementation. |
16:21:12 | dom96 | Instead of using ifs and for loops |
16:22:02 | flaviu1 | A generic lazy seq implementation is difficult at this time, because of the stackoverflow bug I just came across |
16:22:16 | EXetoC | focus on other things? yeah absolutely. these are just luxuries after all |
16:22:39 | dom96 | flaviu1: Why are type classes required for this? |
16:22:44 | EXetoC | flaviu1: with "normal" type classes? |
16:23:00 | flaviu1 | No, user defined |
16:23:08 | Araq | flaviu1: usually endless recursions are rather easy to fix, please try it |
16:23:12 | EXetoC | they're basically unusable like I said |
16:23:32 | flaviu1 | I'm looking into the issue already, I'll see if I can fix it |
16:23:49 | Araq | thank you |
16:24:20 | Klaufir | Araq: I know it's not Haskell. Overall, I am very satisfied with Nimrod; nowadays I don't have to package the whole of Python with a small script to distribute, and contrary to Python, Nimrod is awesomely fast |
16:25:13 | EXetoC | for example, matching doesn't work properly in conjunction with type parameters: "proc put*[T](o: TContainer[T], val: T)" |
16:25:14 | flaviu1 | dom96: So that the lazy sequence is open ended and can be chained without forcing it. |
16:25:33 | flaviu1 | dom96: https://gist.github.com/0754bd3ff9733f1dc831 |
16:25:59 | EXetoC | so you're limited to something like this for now: "proc put(o: TContainer[int], val: int)" |
16:26:26 | Araq | EXetoC: as soon as the compiler optimizes away the closure iterators completely, I don't mind this style. Though it makes me wonder if the programmer ever used a debugger with long LINQ expressions. I have ... |
16:26:31 | Araq | EXetoC: did you report this? |
16:26:59 | EXetoC | yep. I just I'd inform him of this |
16:27:41 | EXetoC | *I thought I'd just... |
16:31:21 | Araq | so ... with the most recent changes, my promises are exactly "data flow variables" |
16:31:42 | Araq | is FlowVar[T] a good name? |
16:32:04 | dom96 | flaviu1: You could probably get rid of the typeclass, just use a T and the compiler would complain if that type doesn't have an `items` proc declared for it. |
16:32:24 | Araq | correct |
16:32:45 | dom96 | or simply restrict it to seq |
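A tiny sketch of the plain-generic approach dom96 describes (hypothetical names): no type class, just a T, and instantiation fails unless the concrete type provides an items iterator.

    proc total[T](c: T): int =
      for x in items(c):       # compiles only if 'items' exists for T
        result += x

    echo total(@[1, 2, 3])     # 6; seq has 'items'
    # echo total(42)           # compile error: no 'items' for int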
16:33:03 | flaviu1 | dom96: Restricting to seq would not work in this case |
16:33:17 | dom96 | why? |
16:33:35 | * | brson joined #nimrod |
16:33:58 | * | cagano joined #nimrod |
16:34:00 | flaviu1 | Because then LazyMap(input:LazyFilter) would be invalid |
16:34:06 | * | cagano left #nimrod ("WeeChat 0.4.3") |
16:34:25 | * | brson_ joined #nimrod |
16:34:26 | EXetoC | Araq: so basically you don't like the idea of chaining several higher-order functions. yup, sounds reasonable :p |
16:35:06 | EXetoC | you can of course divide it into multiple statements instead, but that might not be a silver bullet |
16:37:37 | Araq | actually it kind of is |
16:37:54 | Araq | it's faster, easier to debug and IMHO clearer code |
16:38:03 | EXetoC | ok well there you go ^_^ |
16:38:11 | Araq | it uses the language builtin constructs |
16:38:38 | * | brson quit (Ping timeout: 255 seconds) |
16:38:49 | Araq | and it encourages 1 pass over the data instead of multiple passes |
16:39:24 | Araq | it's not a micro optimization even, properly optimizing away the overhead can only be done by Haskell for now, afaik |
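For illustration, a minimal sketch (using sequtils, not code from the discussion) of the two styles being contrasted:

    import sequtils

    proc isEven(x: int): bool = x mod 2 == 0
    proc square(x: int): int = x * x

    let xs = @[1, 2, 3, 4, 5, 6]

    # chained higher-order calls: builds an intermediate seq, two passes over the data
    let viaChain = xs.filter(isEven).map(square)

    # the builtin-construct style ("'map' is spelt 'for', 'filter' is 'if'"): one pass
    var viaLoop: seq[int] = @[]
    for x in xs:
      if isEven(x):
        viaLoop.add(square(x))

    echo viaChain   # @[4, 16, 36]
    echo viaLoop    # @[4, 16, 36]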
16:39:38 | Klaufir | https://github.com/rtsisyk/luafun |
16:39:48 | dom96 | flaviu1: Create an object variant |
16:39:57 | flaviu1 | Stupid idea probably, but assuming that the closure is pure, couldn't it be made to do separate passes, which would solve the debugging problem in debug builds? |
16:40:27 | EXetoC | builtin constructs? I hope we're talking about the same thing now |
16:41:09 | flaviu1 | dom96: I originally did, but I didn't like it originally for some reason I don't remember. I'll just try to fix the bug |
16:42:15 | EXetoC | Araq: some things can be inlined, but I guess you might be referring to cache locality as well |
16:42:53 | * | hoverbear quit () |
16:43:41 | * | q66 joined #nimrod |
16:43:41 | * | q66 quit (Changing host) |
16:43:41 | * | q66 joined #nimrod |
16:43:56 | EXetoC | I'm sure such rewrites are tricky, but it'd be pretty cool indeed |
16:44:05 | dom96 | flaviu1: Alright. If you get it working put it on nimrod-by-example, it's a nice use case for type classes |
16:44:37 | flaviu1 | I didn't consider that, but yes, it would be |
16:44:43 | * | hoverbear joined #nimrod |
16:45:37 | EXetoC | you mean things like turning a map/filter chain into a single pass? |
16:46:00 | Araq | EXetoC: yes |
16:46:40 | Araq | Klaufir: good point, but I was talking about general deforestation, which LuaJIT doesn't perform. Or maybe it does. |
16:47:21 | Araq | but yes, LuaJIT is incredible. |
16:47:27 | Klaufir | Araq: what do you mean by deforestation? |
16:47:31 | dom96 | Damn. JITs are causing harm to the environment too now? |
16:49:02 | EXetoC | ok well you'll just have to remove some abstractions for now if profiling suggests that it is a good idea |
16:49:49 | Araq | Klaufir: http://www.haskell.org/haskellwiki/Deforestation |
16:50:22 | flaviu1 | Araq: There isn't any way to transverse up a tree, right? |
16:50:29 | flaviu1 | AST tree |
16:50:44 | Araq | the paper I read was also about removing e.g. intermediate trees, the above definition only deals with "lists" though |
16:50:55 | Araq | maybe I misremember |
16:51:09 | flaviu1 | https://en.wikipedia.org/wiki/Deforestation_%28computer_science%29 |
16:51:25 | flaviu1 | No, you're correct: "Deforestation: transforming programs to eliminate trees" |
16:51:34 | Araq | flaviu1: yes, and that's a conscious design decision |
16:52:09 | EXetoC | sounds like a fun weekend project |
16:52:10 | flaviu1 | I'm not asking to be able to do it, I know it makes some things a lot more complicated |
16:53:06 | Araq | it also makes the language essentially unsound |
16:53:36 | Araq | if you can go up within a macro lots of nasty things are possible |
16:54:08 | flaviu1 | I don't want to do it, I was just curious because it seems to be stuck in a loop transforming the exact same tree |
16:55:26 | flaviu1 | Yes, and it throws away essentially all hope of being able to parallelize things |
16:56:57 | Araq | parallelization still is a minor thing. get the compilation cache to work and nobody will ever complain about compile times again. |
16:57:38 | flaviu1 | Do people complain of compile times? Seems pretty fast to me. |
16:59:15 | EXetoC | indeed |
17:01:48 | * | kunev joined #nimrod |
17:02:41 | * | Changaco quit (Ping timeout: 264 seconds) |
17:04:53 | * | brson_ quit (Ping timeout: 240 seconds) |
17:06:20 | EXetoC | still working on parallel stuff? |
17:07:07 | * | brson joined #nimrod |
17:12:00 | * | brson quit (Client Quit) |
17:12:35 | * | brson joined #nimrod |
17:14:09 | * | Changaco joined #nimrod |
17:19:53 | * | brson quit (Ping timeout: 252 seconds) |
17:21:28 | Araq | EXetoC: I'm documenting it |
17:21:39 | Araq | that doesn't mean it's finished though |
17:22:01 | Araq | but I'm quite happy with my design now |
17:22:04 | Araq | bbl |
17:23:07 | * | brson joined #nimrod |
17:26:41 | * | Changaco quit (Ping timeout: 264 seconds) |
17:29:53 | * | Changaco joined #nimrod |
17:48:38 | * | brson quit (Ping timeout: 245 seconds) |
17:50:43 | * | brson joined #nimrod |
17:51:20 | * | brson_ joined #nimrod |
17:55:23 | * | brson quit (Ping timeout: 260 seconds) |
18:01:53 | * | brson_ quit (Ping timeout: 255 seconds) |
18:03:34 | * | brson joined #nimrod |
18:09:22 | * | Matthias247 joined #nimrod |
18:13:01 | * | askatasuna joined #nimrod |
18:14:49 | flaviu1 | C's a bit obsessed with backwards compatibility: _Alignas, _Alignof, _Atomic, _Bool, _Complex, _Generic, _Imaginary, _Noreturn, _Static_assert, _Thread_local :P |
18:17:49 | * | kunev quit (Ping timeout: 252 seconds) |
18:18:20 | * | kunev joined #nimrod |
18:20:07 | * | Jehan_ joined #nimrod |
18:25:47 | * | askatasuna quit (Quit: WeeChat 0.4.3) |
18:26:33 | Araq | Jehan_: with the recent ptr->ref changes my Promise is now exactly a dataflow variable |
18:26:48 | Jehan_ | Araq: Nice. :) |
18:27:02 | Araq | so I will name it FlowVar[T] |
18:28:14 | Jehan_ | Araq: Honestly, if you felt strongly about Promise or Future, I'd be fine with that, too. |
18:28:21 | Jehan_ | These things can be very subjective. |
18:28:58 | Araq | well I figured I still dislike Pending because it's not a noun. "Returns a Pending ..." sounds weird |
18:29:05 | Jehan_ | I do think that it's generally better to name constructs in a way that's natural for "normal" people and not just experts in the field. |
18:29:21 | Jehan_ | Araq: Yeah, I didn't exactly consider it perfect, either. |
18:29:54 | Jehan_ | Araq: It's more that I'm against research slang rather than in favor of any particular name. |
18:30:07 | Jehan_ | And even research slang is sometimes the least bad choice. |
18:30:28 | flaviu1 | Returns a pending Foo doesn't sound too bad |
18:31:56 | Araq | RawPending or PendingBase sux though |
18:34:20 | Araq | dom96: I think I also solved your "parallel server" problem |
18:34:40 | flaviu1 | Araq: What is the reason `void* volatile locals` is volatile? |
18:34:52 | Jehan_ | Reading over the logs: Incremental compilation would be nice, but not the most pressing issue, I think. As long as you use clang, not gcc. :) |
18:35:06 | Jehan_ | I think there are very few people compiling 500 KLOC+ programs. :) |
18:35:16 | Araq | gcc is fast enough for me |
18:36:10 | dom96 | Araq: tell me |
18:36:17 | Jehan_ | Araq: It's not too bad, but for large stuff, clang has an advantage. |
18:36:43 | Jehan_ | Especially given that under certain circumstances, compile times for gcc are not linear in the length of a file. |
18:37:14 | Jehan_ | And I mean with -O0, and no funny interdependencies. |
18:37:42 | Araq | dom96: 'spawn' will support passing ref/closures via a deep copy |
18:38:25 | Araq | so you can pass your future via spawn to a worker thread |
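A rough sketch of how this spawn/FlowVar design later surfaced in the stdlib threadpool module (the API was still in flux at the time of this discussion; requires --threads:on):

    import threadpool

    proc slowDouble(x: int): int =
      # stand-in for expensive work running on a worker thread
      x * 2

    let fv: FlowVar[int] = spawn slowDouble(21)
    # ... do other work while the worker runs ...
    echo ^fv    # '^' blocks until the FlowVar has been written, then reads it
    sync()      # wait for all spawned tasks to finish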
18:38:41 | flaviu1 | Jehan_: The compiler is 50 KLOC, I wonder if there's 500 KLOC total for all nimrod programs |
18:39:11 | Jehan_ | flaviu1: I have compiled at least one 700 KLOC program. |
18:39:18 | flaviu1 | In nimrod? |
18:39:21 | Jehan_ | Yes. |
18:39:38 | Jehan_ | Admittedly, that was mostly generated code for a set of complex mathematical functions. |
18:39:45 | fowl | ._. |
18:39:53 | Jehan_ | Think of most of it as tables in code form. |
18:41:32 | Jehan_ | Incidentally, this is one reason why I (subjectively) prefer Pascal over Python syntax sometimes. Generating code for a whitespace-sensitive language brings some extra annoyances with it. |
18:41:39 | EXetoC | getStackTrace in unittest.test only includes the offending line in my test module |
18:41:51 | EXetoC | doesn't tell me much |
18:42:03 | flaviu1 | Jehan_: Can you post an excerpt from the program? |
18:42:37 | EXetoC | what's to blame? |
18:43:10 | Jehan_ | flaviu1: Not at the moment, I'd have to find it again first. This was like a year ago. |
18:43:26 | Araq | EXetoC: I have no idea |
18:43:36 | EXetoC | hm ok |
18:46:00 | Araq | dom96: a major problem is that once the spawned function is started, it can't register stuff to the dispatcher |
18:46:58 | Araq | it can add it to a thread local dispatcher though |
18:47:09 | Araq | no idea if that's a problem |
18:49:57 | * | ehaliewicz joined #nimrod |
18:50:04 | dom96 | The way IOCP is designed is that there should only be one dispatcher. |
18:50:17 | dom96 | Multiple threads should be polling it at the same time. |
18:50:33 | dom96 | I'm not entirely sure what you have in mind |
18:53:40 | * | brson quit (Ping timeout: 260 seconds) |
18:55:30 | Araq | hmm |
18:56:17 | Araq | what's the problem again? that you wrapped the dispatcher in a 'ref' instead of a 'ptr'? |
18:57:28 | dom96 | huh? |
18:57:46 | dom96 | The problem is I don't know how to approach this. |
18:58:44 | dom96 | if I run multiple poll loop in different threads then the same closure iterator may be resumed in different threads. |
18:58:47 | Matthias247 | i think creating an independent iocp on each thread should also be possible |
18:58:49 | dom96 | *loops |
18:58:54 | Matthias247 | with asio you can do it at least |
18:59:15 | Matthias247 | you can also create only one and use it from mulitple threads |
18:59:38 | Matthias247 | but that model has problems when you use it on Non-Windows |
18:59:42 | dom96 | Matthias247: Yes, but if you use one from multiple threads then Windows will ensure that workload is distributed evenly between the threads. |
19:00:05 | dom96 | We can do it differently on different platforms. |
19:01:17 | Matthias247 | yes, then each thread that is waiting will get results from the queue |
19:01:25 | Matthias247 | the question is: Do you really want that? :) |
19:02:21 | dom96 | I'm not sure if I can register an fd with multiple IOCPs. |
19:02:41 | Matthias247 | .net uses the pattern. Then you receive a callbacks (e.g. to send and receive) on arbitrary threads |
19:03:35 | Matthias247 | what I then did is either immediately redirect the callback to one of my event loops or to use lots of locks |
19:03:55 | dom96 | brb |
19:03:59 | Matthias247 | the first thing destroys all benefits, the second one is complex |
19:04:42 | Araq | Matthias247: both are unlikely to work with nim though |
19:05:44 | Araq | maybe we can solve it with a low level wrapper around IOCP |
19:12:37 | Araq | Matthias247: I'm curious. why did you have the same problem? |
19:13:15 | Matthias247 | Araq: What do you mean with the same problem? :) |
19:13:32 | Araq | "what I then did is either immediatly redirect the callback to one of my event loops or to use lots of locks" |
19:14:33 | Matthias247 | spent the last 2.5 years mainly writing middleware on top of TCP/IP on top of Boost.Asio in C++ and the .NET framework |
19:15:01 | Matthias247 | and started like dom, with using a lot of threads which were processing the events |
19:16:45 | Matthias247 | but then at some point realized the multithreading at that point produces some very complex conditions, is hard to maintain and is just not worth it. So I'm converting to more or less singlethreaded event loops |
19:19:23 | Araq | well it's a hard problem indeed |
19:19:47 | Araq | usually I'd use multi-processing and call it a day |
19:19:58 | Araq | but we want to win benchmarks ... |
19:20:01 | Araq | however |
19:20:16 | Araq | the benchmarks are not run on windows anyway, right, dom96? |
19:20:24 | Matthias247 | winning benchmark also ... depends |
19:21:04 | Matthias247 | With lots of threads and message passing between them I had higher throughput. But worse latency |
19:21:47 | Matthias247 | on windows not that much, but on linux it was quite drastic |
19:24:10 | Jehan_ | The performance of a multi-threaded application depends on a lot of things in practice, unfortunately. |
19:24:14 | * | Jesin joined #nimrod |
19:25:13 | Jehan_ | Matthias247: What do you mean by latency in this context? |
19:26:33 | Matthias247 | Jehan_: I measured the round-trip-time between 2 communicating processes. One sends a message, the other receives it and sends a response to the first. |
19:27:03 | Jehan_ | Matthias247: Hmm, you said threads earlier, not processes? |
19:27:44 | Matthias247 | Jehan_: yes. The latency depends on how many threads are used in each of the processes - how threads are used to receive and transmit messages |
19:28:46 | Jehan_ | Matthias247: Ah, yes. Were you using epoll, select, or simply multiple threads that were all reading/writing a single connection each? |
19:29:16 | Matthias247 | Jehan_: I tested several approaches |
19:29:31 | Matthias247 | from blocking reads and writes on a single thread |
19:29:44 | Matthias247 | to blocking reads in a background thread |
19:30:03 | Matthias247 | doing everything asynchronously in one thread using (e)poll |
19:30:32 | Matthias247 | and using multiple threads running on one asio reactor instance where an arbitrary thread will receive the message and probably another one sends it |
19:31:00 | Matthias247 | the performance drops about in the way I've written it here |
19:31:10 | Jehan_ | Matthias247: Hmm, this is odd. |
19:32:12 | Jehan_ | Matthias247: In my experience, write()/read() over a domain socket etc. is little different in performance from using memcpy() with shared mmap() memory region. On Linux, that is. |
19:32:36 | Matthias247 | blocking calls give the lowest latency. Event loop is in between. And passing data between multiple event loops (threads) was noticeable slower |
19:33:12 | dom96 | Araq: they are |
19:33:15 | Jehan_ | Difficult to say more without knowing the specifics, though. |
19:33:19 | dom96 | Araq: However, Linux is more important. |
19:33:28 | dom96 | So perhaps we should focus on it |
19:33:36 | Araq | dom96: yup |
19:34:38 | Jehan_ | As a practical matter, I think right now a lot of programmers would value ease of use over squeezing the last microsecond out of performance when it comes to concurrency. |
19:35:28 | Jehan_ | See, e.g., how Go's pretty simple concurrency model has become very popular. |
19:35:33 | fowl | accept my PRs please |
19:35:50 | Matthias247 | for sure |
19:36:21 | Jehan_ | And it's become popular because it's basically easy to understand and use. |
19:39:17 | Matthias247 | if anyone is interested: This is how I now started implementing some async future-based IO in C++: https://gist.github.com/Matthias247/c2188248ddc5b597a897 |
19:39:35 | * | brson joined #nimrod |
19:40:31 | Matthias247 | future.then takes an executor as a parameter. So all the callbacks get invoked in the same executor/thread |
19:43:53 | fowl | Matthias247, why do you capture exec? looks like it is only used in read_fut_3 |
19:44:30 | Matthias247 | fowl: probably because the example evolved and there was copy&paste involved ;) |
19:44:43 | fowl | ah |
19:45:47 | fowl | Matthias247, can we do this in nimrod? |
19:46:17 | Matthias247 | yes. It's similar to dom96's async io |
19:46:54 | Matthias247 | with the difference that the ability to chain futures is missing |
19:47:12 | Matthias247 | but instead there is async/await |
19:47:59 | fowl | we could have both couldnt we |
19:48:06 | Matthias247 | yes |
19:49:24 | Matthias247 | another thing is that in C++ I can complete futures from different threads and also set an arbitrary executor to execute the "then" callback. This would be hard in current Nimrod because of the thread-local garbage collector |
20:00:04 | * | Mat3 joined #nimrod |
20:00:14 | Mat3 | Good Day |
20:00:34 | EXetoC | hi |
20:00:43 | Mat3 | hello EXetoC |
20:03:35 | dom96 | Araq: What do you think is the best way to parallelize epoll then? |
20:06:30 | Araq | dom96: I have no idea, does epoll do the same as IOCP wrt threading? |
20:06:39 | dom96 | no |
20:07:13 | dom96 | With epoll we should probably create an epoll instance per thread |
20:07:32 | Araq | alright. |
20:07:38 | Matthias247 | that's what I would do too |
20:07:54 | Matthias247 | or in other words: one epoll instance per dispatcher |
20:09:38 | dom96 | But it really is not that simple. |
20:10:29 | dom96 | hrm |
20:10:40 | dom96 | I suppose each epoll instance would get the same file descriptors |
20:11:24 | dom96 | and then each thread would poll its own epoll instance |
20:12:00 | dom96 | But then we would have the issue that multiple threads would get the same info about which file descriptors are ready |
20:12:17 | dom96 | causing the same callback to be run in two or more threads |
20:15:39 | Araq | dom96: I don't think this can happen. 2 epolls returning the same file/socket IDs? |
20:17:32 | dom96 | of course it can |
20:17:58 | dom96 | All epoll does is tell you which fds are readable/writeable |
20:18:45 | Araq | well it shouldn't be hard to filter this away |
20:20:40 | EXetoC | I'm gonna do some serious unit testing with the help of metaprogramming. 1000 checks > 100 checks :-) |
20:20:54 | Araq | in the worst case we can use a static hash table of Locks for the IDs and do some tryAcquire+release dance |
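A rough sketch of that idea, with all names hypothetical and no claim that this is how the dispatcher ended up working: a fixed table of locks keyed by the fd, with tryAcquire deciding which thread gets to handle a ready descriptor.

    import locks

    var lockTab: array[256, Lock]         # hypothetical static table of locks
    for l in lockTab.mitems: initLock(l)

    proc tryHandle(fd: int): bool =
      ## returns true if this thread won the right to handle 'fd'
      let idx = fd mod lockTab.len
      if tryAcquire(lockTab[idx]):
        # ... process the ready descriptor here ...
        release(lockTab[idx])
        result = true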
20:29:22 | dom96 | Araq: Won't that be extremely inefficient? |
20:33:06 | dom96 | But the bigger problem is resuming the iterators. |
20:33:15 | dom96 | We can't do that from a different thread |
20:33:40 | Araq | dom96: I still misunderstand the problem I guess |
20:34:00 | dom96 | Araq: Have you read my blog post? |
20:34:00 | Araq | in my mind only the "initial" socket id is a problem |
20:34:12 | Araq | dom96: yes. 3x. |
20:34:23 | Araq | it's been a while |
20:36:45 | dom96 | well i'm not sure what you mean |
20:37:39 | dom96 | In the case of a server what we want is for all threads to be waiting for a client connection |
20:38:12 | dom96 | at least in the initial stages |
20:39:45 | Matthias247 | imho there are 2 solutions for this |
20:40:07 | Matthias247 | a) allow to use I/O objects (everything with an FD) only on the single thread/dispatcher that owns them |
20:40:31 | Matthias247 | b) let the user care about the relevant locking that is required to use such objects from multiple threads |
20:45:43 | dom96 | Argh, looking at the code it's really hard. |
20:46:12 | dom96 | A server usually looks like this: |
20:46:22 | dom96 | let client = await server.accept() |
20:46:29 | dom96 | processClient(client) |
20:46:35 | dom96 | in an infinite loop |
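For context, a hedged sketch of that loop fleshed out with the asyncdispatch/asyncnet API as it looks today (module and type names differed a bit in the Nimrod 0.9.x era):

    import asyncdispatch, asyncnet

    proc processClient(client: AsyncSocket) {.async.} =
      while true:
        let line = await client.recvLine()
        if line.len == 0: break            # client disconnected
        await client.send(line & "\c\L")   # echo the line back
      client.close()

    proc serve() {.async.} =
      var server = newAsyncSocket()
      server.bindAddr(Port(12345))
      server.listen()
      while true:
        let client = await server.accept()
        asyncCheck processClient(client)   # start handling; don't wait for it to finish

    asyncCheck serve()
    runForever()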
20:47:36 | Matthias247 | yes. But processClient should only start to process/interact with the client. Not wait until it's done ;) |
20:47:48 | dom96 | Indeed. |
20:47:51 | dom96 | That is what it does. |
20:48:20 | dom96 | It's a case of performing this loop in multiple threads |
20:48:26 | Matthias247 | in my current thoughts I would have some functionality that would allow me to move objects between threads under certain circumstances. |
20:48:40 | dom96 | or perhaps a simple case of spawning 'processClient'? |
20:48:45 | Matthias247 | E.g. I could make something like client->move_to_other_dispatcher after accepting |
20:49:00 | Matthias247 | but it would not be possible anymore after the client has already registered with a dispatcher |
20:51:16 | Matthias247 | but that would still require manual scheduling. And is therefore different than what Go does |
20:51:28 | Matthias247 | -scheduling +load balancing |
21:07:05 | * | Changaco quit (Quit: Changaco) |
21:15:15 | * | hoverbea_ joined #nimrod |
21:16:58 | * | hoverbear quit (Ping timeout: 245 seconds) |
21:26:05 | * | Mat3 quit (Quit: Verlassend) |
21:27:33 | fowl | accept my PRs please |
21:27:33 | fowl | dom96, Araq ^ |
21:28:23 | fowl | #1174 and #1243 |
21:34:53 | * | kunev quit (Ping timeout: 245 seconds) |
21:39:33 | NimBot | Araq/Nimrod devel 4ae9486 Billingsly Wetherfordshire [+0 ±1 -0]: fix #1241 |
21:39:33 | NimBot | Araq/Nimrod devel 361b9fe Dominik Picheta [+0 ±1 -0]: Merge pull request #1243 from fowlmouth/patch-4... 2 more lines |
21:40:10 | dom96 | I'll let Araq merge the other one |
21:43:12 | dom96 | Araq: https://news.ycombinator.com/item?id=7851274 |
21:43:19 | dom96 | May be of interest |
21:43:43 | Araq | fowl: your hash sucks but fine |
21:43:52 | NimBot | Araq/Nimrod devel 4099abc Billingsly Wetherfordshire [+0 ±1 -0]: added `==` for PJsonNode |
21:43:52 | NimBot | Araq/Nimrod devel ac797e1 Billingsly Wetherfordshire [+0 ±1 -0]: added json.hash |
21:43:52 | NimBot | Araq/Nimrod devel 2dba171 Andreas Rumpf [+0 ±1 -0]: Merge pull request #1174 from fowlmouth/patch-2... 2 more lines |
21:55:31 | fowl | Araq, ouch |
22:04:30 | Jehan_ | dom96: Interesting. I haven't looked much at Rust (because I don't care about paying all the syntactic overhead to avoid GC), but I'm surprised that they have such a hardcore stance against shared memory. |
22:05:18 | Jehan_ | In reality, you cannot avoid using shared memory for a lot of important stuff that doesn't scale for message passing. |
22:05:48 | Jehan_ | So any language that makes shared memory cumbersome to use falls short for many important use cases. |
22:06:51 | flaviu1 | From what I understand about Rust, you can do whatever you want, as long as it's inside an unsafe block. So you should be able to get shared memory if you really need it. |
22:07:41 | Araq | same for nimrod really, only we don't need 'unsafe' as it's redundant |
22:07:56 | Araq | addr, cast and ptr are all keywords for a reason |
22:08:05 | Jehan_ | flaviu1: Hmm, I may have to look at it. |
22:08:31 | Jehan_ | My problem with Rust is really that it's too specific a language. |
22:08:49 | Jehan_ | I don't deny that the niche they're designing for exists, but … that it's a niche. |
22:09:34 | Araq | fowl: your PR breaks bootstrapping. well done |
22:09:54 | EXetoC | :< |
22:10:00 | flaviu1 | TBH I like having `unsafe` blocks because it increases syntactic overhead and discourages it, as well as being easy to grep for. They could also have certain properties to prevent unexpected things from happening like GC collections, though that's less of an issue. |
22:10:03 | dom96 | Araq: be nice to the poor fella |
22:10:15 | Jehan_ | Poor fowl. :) |
22:10:28 | flaviu1 | Jehan_: http://rustbyexample.com/unsafe.html |
22:10:33 | Jehan_ | flaviu1: The thing is, shared memory shouldn't be unsafe to begin with. |
22:10:55 | dom96 | Araq: It's because of case sensitivity hah |
22:11:12 | Araq | dom96: please fix it then, I'm busy |
22:11:22 | Jehan_ | If you can screw up using shared memory other than resulting in a defined exception, the design is broken. |
22:11:44 | EXetoC | 'x and y == z' is parsed as 'x and (y == z)' with #!strongSpaces I think. is this intended? |
22:12:15 | fowl | EXetoC, in stronspaces i dont think there is operator precedence |
22:12:22 | NimBot | Araq/Nimrod devel 69a5954 Dominik Picheta [+0 ±1 -0]: Capitalised enum value names in JSON module. |
22:13:10 | EXetoC | there should be |
22:13:39 | EXetoC | 'x + y * z' should be equivalent to 'x + (y * z)' still. doesn't that imply operator precedence? |
22:13:40 | dom96 | Jehan_: If you're right it makes the fact that they have a higher user base even more depressing :| |
22:14:11 | flaviu1 | Jehan_: If it's something like Java, where everything is somewhat atomic, then shared memory isn't unsafe, but I don't think Rust gives that guarantee. Plus, they'd have to figure out how that interacts with borrowing and their other fancy features |
22:14:21 | Jehan_ | dom96: Consider that Java has an even higher user base. And then read: http://brinch-hansen.net/papers/1999b.pdf |
22:14:51 | Araq | EXetoC: I don't think so, I tested that with #!strongSpaces |
22:15:01 | Araq | er |
22:15:08 | flaviu1 | Jehan_: That's old, it doesn't apply anymore |
22:15:22 | Araq | 'x and y == z' is always 'x and (y == z)' |
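A tiny illustration of the parse Araq describes:

    let x = true
    let y = 3
    let z = 3

    # '==' binds tighter than 'and', so this is x and (y == z)
    echo x and y == z    # true
    # if it parsed as (x and y) == z it would not even type-check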
22:15:26 | dom96 | Jehan_: Can't right now. |
22:15:32 | Jehan_ | flaviu1: In what way? |
22:16:08 | flaviu1 | They've significantly changed the java memory model |
22:16:14 | EXetoC | bbah |
22:16:18 | Jehan_ | flaviu1: That's not about the memory model. |
22:16:22 | dom96 | Jehan_: But yeah, Java's user base depresses me too. |
22:16:28 | Jehan_ | flaviu1: It's about the language features. |
22:17:08 | Jehan_ | It's about them naming something monitors that aren't actually monitors. |
22:18:49 | flaviu1 | I don't really know the difference there, sorry |
22:19:11 | Jehan_ | flaviu1: Monitors avoid data races. Java doesn't avoid data races. |
22:19:39 | Jehan_ | The changes to the MM affect some very subtle and important semantics, but doesn't change that the programming model is broken. |
22:21:25 | Matthias247 | my rust code was so flooded with unsafe keywords that I don't know if that's really worth it ;) |
22:22:09 | bstrie | Matthias247: unsafe rust is still safer than C :P |
22:22:54 | Matthias247 | i think it's the same. Both can be perfectly safe ;) |
22:23:28 | Jehan_ | bstrie: I think that's damning with faint praise. :) |
22:23:59 | Jehan_ | That said, I like the ideas behind Rust. |
22:24:35 | Jehan_ | I'm just skeptical that there are a great many people for whom it's important to avoid garbage collection. |
22:24:46 | bstrie | Matthias247: both *can* be perfectly safe. it's just that with C, you can never know for sure :P |
22:24:53 | Jehan_ | Kernel programmers, Firefox developers, AAA video games? |
22:25:33 | * | superfunc joined #nimrod |
22:25:46 | Araq | no, you can't be safe with a language in which every basic operation is full of undefined semantics |
22:26:03 | Araq | C *cannot* be safe |
22:26:23 | * | io2 quit (Quit: ...take irc away, what are you? genius, billionaire, playboy, philanthropist) |
22:26:35 | Matthias247 | it depends on what's your definition of safe |
22:26:48 | bstrie | Araq: quantum effects could spontaneously produce a program that is entirely safe, assuming a single version of a single compiler |
22:27:23 | Matthias247 | mine is: Being able to write programs that execute 100% correctly. And that's possible in C |
22:28:22 | bstrie | I think what we're trying to say is, it's theoretically possible, just infinitely unlikely :) |
22:28:38 | Araq | Matthias247: unless you're talking about some C subset that you prove correct with tools, you won't get 100% |
22:29:04 | Matthias247 | bstrie: be aware that 99% of your SAFETY CRITICAL applications are using C |
22:29:19 | Matthias247 | if they would be all broken you might be already dead :) |
22:30:12 | Jehan_ | Matthias247: I'm trying not to think of that. :) |
22:30:14 | bstrie | Matthias247: they might not be broken for some inputs, which is why I'm still alive. for some inputs, they are all definitely broken. our lives hang on a thread! |
22:30:27 | superfunc | Sure, they are possible to write, as proven by their existence. But being able to write safe, correct code in a reasonable time, with average-level developers? C doesn't give you that |
22:31:02 | Matthias247 | bstrie: even though that might be true. Rust also doesn't solve that |
22:31:25 | superfunc | I think its more of a question of "does the language facilitate safer programming" rather than a guarantee |
22:31:27 | Jehan_ | I'm thinking of the OpenSSL debacle right now. |
22:31:37 | dom96 | I think that because developers are lazy they will overuse Rust's unsafe features to the point that the safety which Rust adds won't matter much :P |
22:31:56 | dom96 | well, *most developers :P |
22:32:18 | Jehan_ | dom96: Well, as the saying goes, you can write Fortran in any language. |
22:32:48 | Jehan_ | You cannot stop someone determined enough from shooting themselves in the foot. |
22:33:33 | Jehan_ | But you can put safety catches on guns regardless and it's generally considered a good idea. |
22:33:33 | * | superfunc quit (Quit: Page closed) |
22:34:01 | flaviu1 | dom96: If you give them sufficiently powerful and easy-to-use tools, most developers will be too lazy to use unsafe features |
22:34:09 | fowl | Jehan_, the safety on a weapon is a good idea, you disagree? |
22:34:27 | Jehan_ | fowl: No, I don't? My point was that it is a good idea. |
22:34:29 | dom96 | flaviu1: The thing is, are Rust's safety features easy to utilize? |
22:34:33 | Matthias247 | the question is what helps more: To make error-prone things harder or to make "safer" things easier :) |
22:34:47 | flaviu1 | Matthias247: Why not both? |
22:34:59 | fowl | Jehan_, sry, misread you |
22:35:02 | dom96 | IIRC there are cases where the safety features get in the way of a clean algorithm. |
22:35:02 | fowl | your msg* |
22:35:25 | flaviu1 | dom96: That means the safety features are not powerful enough |
22:35:50 | Jehan_ | dom96: Well, you can do safety wrong, obviously. |
22:36:06 | Matthias247 | flaviu1: if it is doable it would be great. but somehow these things are conflicting |
22:36:44 | flaviu1 | I really like the fine-grained safety of nimrod, where I can turn off and on only the features I want. I still get bounds checks, but I can do stuff like overflow ints |
22:36:56 | flaviu1 | Matthias247: How so? |
22:37:17 | Matthias247 | try rust and you will know ;) |
22:37:28 | dom96 | well my point is that there are advantages and disadvantages to both approaches: rust's safety features and a GC |
22:37:39 | flaviu1 | You can use GC in rust |
22:38:12 | dom96 | I am sure that these safety features are not perfect, and my suspicions tell me that they would get in my way more than they would help me. |
22:38:13 | Jehan_ | flaviu1: Correct, but the language is still weighed down by all the other things just in case you don't want to use it. |
22:39:40 | Jehan_ | flaviu1: Don't get me wrong, if Rust confined C++ to the, umm, rust heap of history, I'd be extremely happy. :) |
22:41:04 | Matthias247 | I think swift has better chances ;) |
22:41:25 | flaviu1 | Matthias247: No one's going to use Swift because of vendor lock-in |
22:41:33 | * | springbok joined #nimrod |
22:41:37 | flaviu1 | unless they opensourced it and I haven't heard |
22:41:40 | Jehan_ | Matthias247: Assuming that it ever runs on a non-Apple device and that they fix the performance issues, yeah. |
22:41:40 | * | brson_ joined #nimrod |
22:41:50 | Matthias247 | "no one" is a bit strong. The Apple guys will use it ;) |
22:41:52 | Jehan_ | flaviu1: Yeah. |
22:42:03 | dom96 | I wonder if Go ever becomes an official Android language. |
22:42:04 | EXetoC | flaviu1: since when do most professional devs care :p |
22:42:11 | dom96 | *will ever become |
22:42:17 | EXetoC | or maybe they care more nowadays, dunno |
22:42:35 | Matthias247 | but I think it has at least some chance of being open-sourced |
22:42:41 | Matthias247 | dom96: don't think that will happen |
22:42:42 | Jehan_ | dom96: I'd give Dart better chances. |
22:42:47 | flaviu1 | EXetoC: Well, it's useless if they develop for anything but iOS, so they might care a bit |
22:43:01 | Matthias247 | Gos programming model doesn't fit UI applications |
22:43:20 | Jehan_ | Dart is really impressive, and if it weren't JIT-only, I'd use it for a lot more things. |
22:43:21 | EXetoC | the mobile market is more fragmented I guess |
22:43:22 | Matthias247 | yes, Dart fits that much more |
22:43:27 | flaviu1 | Matthias247: Not deeply familiar with Go, but why not? |
22:44:22 | Matthias247 | flaviu1: because it builds heavily on the goroutine - do anything on any thread - model. And UIs are typically single-threaded environments |
22:44:45 | * | OrionPK quit (Ping timeout: 252 seconds) |
22:44:55 | Araq | Matthias247: that's only a problem if the UI hasn't been written in Go |
22:44:56 | * | brson quit (Ping timeout: 255 seconds) |
22:45:00 | dom96 | Jehan_: Isn't Dart a compile-to-js language? |
22:45:14 | Jehan_ | dom96: No. Compilation to JS is one option. |
22:45:25 | Jehan_ | But it also has its own JIT compiler. |
22:45:32 | dom96 | oh, didn't know that. |
22:45:53 | Jehan_ | Whatever you think of the language design, the compiler technology side of Dart is pretty impressive. |
22:45:56 | Matthias247 | Araq: if you write it in Go you would still have to care heavily about the whole event synchronization. And for that you would most likely use a single thread/goroutine |
22:46:02 | Jehan_ | Including the dart2js stuff. |
22:46:15 | Matthias247 | Jehan_: the APIs are also designed in a great way |
22:47:13 | Matthias247 | and they responded to each of my bug reports immediately - so I like it ;) |
22:47:32 | Jehan_ | It does amuse me that Go gets a lot more publicity than Dart, even though a lot more language design experience and engineering effort has gone into Dart. |
22:48:18 | Jehan_ | As I said, the main problem I have with Dart is nothing that's really its fault, but that I'm a bit gun-shy when it comes to JIT-only implementations. |
22:48:37 | Matthias247 | probably because the main audience (web devs) are busy with checking out their 1001 JS frameworks ;) |
22:49:36 | Jehan_ | Nimrod still has more bugs (no offense, Araq), but if it blows up, I can track down and fix the stuff. If Dart blows up, I'm looking at a black box with no idea where to start. |
22:49:43 | Matthias247 | but for server side it would also easily beat node.js. But nobody seems to care |
22:49:54 | Jehan_ | Matthias247: Yeah. |
22:50:12 | Jehan_ | I think it's something of a misconception about it being a JS replacement. |
22:50:40 | * | OrionPK joined #nimrod |
22:51:35 | * | brson_ quit (Quit: leaving) |
22:51:52 | * | brson joined #nimrod |
22:51:58 | flaviu1 | No one seems to have done a language based on inferred structural typing |
22:53:26 | flaviu1 | I guess Dart considers dead code elimination a major feature; they call it tree shaking |
22:53:30 | * | fowl afk grass mowing |
22:54:08 | EXetoC | fowl: be safe |
22:54:30 | fowl | heh i didnt know my client would send that to all servers |
22:54:37 | fowl | i just lost points in idlerpg :( |
22:55:30 | Jehan_ | What the hell is idlerpg? |
22:55:37 | flaviu1 | http://irc-wiki.org/IdleRPG |
22:55:57 | flaviu1 | probably the most useless thing I've seen |
22:56:13 | dom96 | damn, I should do that |
22:56:33 | fowl | there is one on this server, #idleRPG |
22:56:38 | EXetoC | flaviu1: just because you don't know how to play it |
22:57:46 | dom96 | meh, I joined and lost the will. |
22:58:11 | dom96 | I should rewrite that ogame clone for IRC I was writing in Python 5 years ago |
22:58:18 | fowl | oo do it dom |
22:58:32 | fowl | id like to help, i like ogame but having it in the browser is too distracting |
22:59:52 | dom96 | fowl: We should collaborate and write it together. :) |
23:00:50 | fowl | ok |
23:01:05 | fowl | i need to cut the grass first before it gets dark..ugh |
23:01:09 | fowl | bl |
23:01:09 | fowl | bbl |
23:01:22 | dom96 | well, I can't do it today or in the near future anyway |
23:01:28 | dom96 | after my exams possibly |
23:12:24 | * | Demos joined #nimrod |
23:15:36 | * | darkf joined #nimrod |
23:17:07 | * | freezerburnv joined #nimrod |
23:20:18 | * | Jehan_ quit (Ping timeout: 245 seconds) |
23:20:22 | * | Matthias247 quit (Read error: Connection reset by peer) |
23:29:18 | * | hoverbea_ quit () |
23:33:20 | * | Jehan_ joined #nimrod |
23:33:42 | Jehan_ | Well, so much for German engineering. :) |
23:34:04 | Araq | I thought you're sleeping now |
23:34:36 | Jehan_ | No, KD decided that now was a good time for me to take a break. |
23:34:45 | * | aboutGod joined #nimrod |
23:35:26 | Jehan_ | While I was in the middle of pulling a large repository update. |
23:36:39 | Araq | hi aboutGod |
23:44:53 | * | aboutGod left #nimrod (#nimrod) |
23:45:36 | Araq | ha that always happens when I talk to him |
23:45:57 | Araq | he's scared of me ... or a bot ... or both |