| 00:08:38 | Skrylar | hmm | 
| 00:08:48 | Skrylar | is there an obvious downside to using floats instead of ints for all gui layouts? | 
| 00:09:02 | Skrylar | i think GL-based interfaces already do that | 
| 00:12:33 | Demos | not really, although floats will not have meaning if you are talking in terms of pixels | 
| 00:13:18 | Skrylar | i usually do a 1:1 with pixels | 
| 00:13:22 | Skrylar | 1.0 in a float is a pixel | 
| 00:14:11 | Skrylar | subpixel precision isn't exactly unheard of (Apple does it with fonts on retina displays), and a projection matrix can normalize it back down while giving the designer a less insane unit than "percentages of the screen" | 
| 00:14:40 | Skrylar | screen percentages only make sense if you have scalable graphics... and well, as neat as those are gamedevs don't tend to keep SVGs around for the UI | 
| 00:16:36 | * | Jesin quit (Ping timeout: 276 seconds) | 
| 00:46:20 | * | q66 quit (Ping timeout: 252 seconds) | 
| 00:50:30 | * | DAddYE quit (Remote host closed the connection) | 
| 00:50:44 | * | dogurness joined #nimrod | 
| 00:51:04 | * | DAddYE joined #nimrod | 
| 00:55:38 | * | DAddYE quit (Ping timeout: 255 seconds) | 
| 00:56:39 | Skrylar | Demos: at the moment i'm still positive about the idea of how cegui did things | 
| 00:56:54 | Skrylar | which is basically a relative+absolute float pair | 
| 00:57:29 | Skrylar | so (0.5,10) would mean something like "add 50% of the parent's width, then add 10px" | 
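A minimal Nim sketch of the relative+absolute pair Skrylar describes; the `UDim` name and fields are made up for illustration, not CEGUI's actual API:

```nim
type
  UDim = object
    rel: float   # fraction of the parent's extent
    abs: float   # absolute offset, 1.0 == one pixel

proc resolve(d: UDim, parent: float): float =
  ## Resolve against the parent's size: relative part plus absolute part.
  result = d.rel * parent + d.abs

echo resolve(UDim(rel: 0.5, abs: 10.0), 640.0)  # 50% of 640 plus 10px -> 330.0
```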
| 00:59:51 | * | dogurness quit (Quit: dogurness) | 
| 01:00:12 | * | nande joined #nimrod | 
| 01:04:08 | * | Demos_ joined #nimrod | 
| 01:04:25 | * | brson quit (Ping timeout: 252 seconds) | 
| 01:05:03 | * | brson joined #nimrod | 
| 01:11:12 | * | brson quit (Ping timeout: 276 seconds) | 
| 01:38:41 | * | Demos_ quit (Ping timeout: 264 seconds) | 
| 01:56:09 | Demos | Oh that is good | 
| 01:56:15 | Demos | I should do that in my GUI code | 
| 01:56:30 | Demos | but I want to get scroll bars working first | 
| 02:15:35 | * | xenagi quit (Quit: Leaving) | 
| 02:20:21 | renesac | Skrylar, on the efficiency side, integer operations are faster than float on CPU, and you likely need a smaller integer than a float | 
| 02:20:46 | renesac | but floats seem more flexible, and easier to interface with opengl, I guess | 
| 02:21:03 | renesac | do you plan using float32 or float64? | 
| 02:38:36 | Demos | and this is GUI code, so this kind of micro optimization is not really worth it | 
| 02:55:42 | fowl | Demos, can i see what u have so far | 
| 02:56:38 | Demos | https://github.com/barcharcraz/Systemic | 
| 02:56:49 | Demos | build testapps/glfwtest for my little test scene | 
| 02:56:58 | Demos | just basic navigation and some editor buttons | 
| 02:57:33 | Demos | you could add obj files to the assets directory I guess as well, and create them in game (wow so editor) | 
| 02:59:13 | * | Varriount|Mobile joined #nimrod | 
| 02:59:22 | fowl | :/ glfw.nim(283, 31) Error: cannot evaluate at compile time: count | 
| 02:59:42 | fowl | maybe i should update it | 
| 03:00:12 | * | Kelet quit (Quit: nn) | 
| 03:02:12 | fowl | Demos, do you have changes on opengl that arent commited | 
| 03:02:35 | Skrylar | :| | 
| 03:02:37 | Skrylar | are you kidding me | 
| 03:02:56 | Demos | no, however I did update opengl, so you will want to babel install opengl again | 
| 03:02:58 | Skrylar | if a public generic function calls a private module function, nimrod bitches that it can't access the symbol | 
| 03:03:11 | fowl | Demos, i did update | 
| 03:03:18 | Demos | error? | 
| 03:03:28 | fowl | glcore.nim(57, 22) Error: type mismatch: got (GLuint, GLsizei, GLint, cstring) | 
| 03:03:28 | fowl | but expected one of: | 
| 03:03:28 | fowl | opengl.glGetShaderInfoLog(shader: GLuint, bufSize: GLsizei, length: ptr GLsizei, infoLog: cstring) | 
| 03:03:57 | Varriount|Mobile | Skrylar: Odd, I could've sworn that bug had been fixed. | 
| 03:04:34 | Skrylar | i could try a compiler update | 
| 03:05:30 | Skrylar | nope | 
| 03:05:36 | fowl | Skrylar, it works for me | 
| 03:06:27 | Skrylar | fowl: donno; i'm using the maxrectpack module on my github | 
| 03:06:46 | Skrylar | i did an import, then it complained it couldn't access the rectangle type until i imported it | 
| 03:07:09 | Demos | fowl: what is the md5 of your copy of opengl.nim | 
| 03:07:32 | fowl | 12d8f163912f01ae4bfc8d191c202f55 | 
| 03:07:40 | Demos | 99f... | 
| 03:07:53 | fowl | oh wait thats my local verison | 
| 03:07:54 | fowl | version | 
| 03:08:12 | fowl | 0976e19d0d57c4b463f107b4a6c568b8 | 
| 03:08:20 | Demos | still different | 
| 03:08:28 | Demos | let me reinstall, maybe it is a new regression | 
| 03:08:45 | Skrylar | fowl: did you use a generic proc on a generic type? | 
| 03:08:49 | Demos | also I am not sure if the thing builds on systems other than windows, although I can fix it in a jiff if it does not | 
| 03:09:42 | Demos | oh shit it is | 
| 03:13:00 | Skrylar | bleh | 
| 03:13:11 | Demos | well I fixed those compiler errors, but now I am getting an ICE | 
| 03:13:20 | Skrylar | well i just made all the methods public, tacked "Internal" on to the name, and just accept i have to import everything to get this generic to work | 
| 03:14:33 | fowl | Skrylar, methods cant be private anyways | 
| 03:15:01 | fowl | Skrylar, is the code that wasnt working on github | 
| 03:15:35 | Skrylar | fowl: maxrects internals werent methods | 
| 03:16:08 | Skrylar | TryGet is a proc :P | 
| 03:16:27 | Skrylar | fowl: also yes, its skylights.maxrects, let me get the link | 
| 03:16:49 | fowl | Skrylar, you called it a method, not me | 
| 03:16:54 | Skrylar | https://github.com/Skrylar/Skylight/blob/master/skylight/maxrectpack.nim | 
| 03:17:19 | Skrylar | meh, i already spent ten years making no distinction between proc/function/subroutine/method | 
| 03:19:05 | fowl | lol @ Buttstump() | 
| 03:19:09 | Demos | OK fowl just babel install opengl#f741 after pulling my latest fixes | 
| 03:19:39 | * | ehaliewicz joined #nimrod | 
| 03:20:25 | fowl | :O Error: internal error: getTypeDescAux(tyEmpty) | 
| 03:23:18 | * | DAddYE joined #nimrod | 
| 03:23:28 | Demos | did you babel install opengl#f741? | 
| 03:23:57 | Demos | or just use your local version, that is the hash of the commit that merged your change of things from var to ptr | 
| 03:25:05 | fowl | oh it works with that revision | 
| 03:26:13 | Demos | I have everything set to dynlibOverride, even on non-windows, so that could be a problem | 
| 03:28:38 | fowl | i got a bunch of undefined reference errors | 
| 03:28:51 | Demos | you on windows? | 
| 03:28:54 | Demos | or another system | 
| 03:28:56 | fowl | linux | 
| 03:29:03 | Demos | ugh, sec | 
| 03:29:48 | fowl | how does that work for you? if you static link dont you need the headers | 
| 03:30:12 | Demos | not really, the interface is defined in nimrod code, that is in babel | 
| 03:30:28 | fowl | im going to move the dynliboverride stuff to inside @if windows | 
| 03:30:40 | Demos | just pull again | 
| 03:30:42 | Demos | I did it for you | 
| 03:31:06 | fowl | nimrod.cfg(3, 0) Error: tabulators are not allowed | 
| 03:31:12 | fowl | :D | 
| 03:31:21 | Demos | oh ffs | 
| 03:31:29 | fowl | (what harm would tabs do in .cfg? -_-) | 
| 03:31:36 | * | pragmascript joined #nimrod | 
| 03:32:14 | Demos | I am all for consistency, actually I dont even like the .cfgs but you can not specify dynlibOverride in a code file | 
| 03:33:49 | fowl | hmm Error: unhandled exception: GLX: Failed to create context: GLXBadFBConfig [EGLFW] | 
| 03:34:25 | Demos | glxinfo? | 
| 03:34:33 | Demos | do you have openGL 3.1? | 
| 03:35:03 | Demos | or actually 3.2 as it turns out | 
| 03:35:36 | Demos | I can change it to 3.1, and I do when I am working on my linux box | 
| 03:36:44 | fowl | 3.1 | 
| 03:37:45 | fowl | let me see if the glfw demo app runs | 
| 03:38:47 | * | nande quit (Ping timeout: 265 seconds) | 
| 03:40:14 | Demos | OK it should work now | 
| 03:40:21 | Demos | sorry, have not run it on linux in a while | 
| 03:41:46 | fowl | oh wtf.. cannot open 'glfw/glfw' | 
| 03:46:23 | Demos | sorry... I dont have babel dependencies set up | 
| 03:46:39 | fowl | Demos, this is perplexing | 
| 03:47:13 | Demos | you need nim-glfw, cairo, assimp, freeimage | 
| 03:47:36 | fowl | its not recognizing glfw as being installed | 
| 03:47:59 | Demos | it is seriously not an exciting app at this point, have not had a lot of time to work on it | 
| 03:48:20 | Demos | you have nim-glfw and not nimrod-glfw | 
| 03:48:55 | fowl | yes | 
| 03:51:03 | * | pragmascript quit (Quit: Page closed) | 
| 03:52:40 | * | Jesin joined #nimrod | 
| 03:54:23 | Demos | very strange, perhaps turn on verbose output and see where it is searching | 
| 04:00:04 | fowl | aha | 
| 04:00:15 | fowl | i had the old unversioned nim-glfw still installed | 
| 04:01:46 | fowl | Demos, compiles and runs :) | 
| 04:02:02 | Demos | woooo | 
| 04:02:42 | Demos | not much, there is a more editor friendly camera in there someplace, but I need to make getting it wired up less of a PITA | 
| 04:03:26 | * | Jesin quit (Ping timeout: 265 seconds) | 
| 04:03:47 | fowl | all the pieces are here, please recreate TES3 Morrowind thx | 
| 04:04:21 | fowl | .j #openmw | 
| 04:06:15 | Skrylar | hmm | 
| 04:06:29 | Skrylar | well i get to tweak the maxrect packer again later | 
| 04:06:51 | Skrylar | it has some issues with sorting items into derpy ways | 
| 04:22:30 | Skrylar | http://imgur.com/GlHr7IO this is apparently how arial looks | 
| 04:40:18 | * | nande joined #nimrod | 
| 04:42:01 | Demos | wait why did I join openmw | 
| 04:43:31 | BitPuffin | Demos: wat | 
| 04:43:50 | Demos | I think fowl caused me to join #openmw | 
| 04:43:51 | reactormonk | gotta say make is useful. | 
| 04:43:56 | Demos | I dont know how irc works... | 
| 04:44:17 | reactormonk | inefficient, but useful. | 
| 04:46:21 | Demos | reactormonk: why not use nake? | 
| 04:47:22 | reactormonk | Demos, because there's more for make ;-) | 
| 04:47:35 | reactormonk | want parallel? -j4 | 
| 04:50:06 | Demos | I really like the fact that nimrod does not require any kind of external build system | 
| 04:50:27 | reactormonk | Demos, I fully agree with that | 
| 04:50:38 | reactormonk | Demos, I'm using make for some NLP stuff | 
| 04:50:42 | Demos | I dont even like the nimrod.cfg files :D | 
| 04:58:03 | * | darithorn quit (Ping timeout: 265 seconds) | 
| 05:22:08 | * | BitPuffin quit (Read error: Operation timed out) | 
| 05:24:26 | Skrylar | i donno, i prefer ninjabuild or tup to make -j4 | 
| 05:24:30 | Skrylar | ninja is actually good at it | 
| 05:24:59 | fowl | Demos, did i? | 
| 05:25:03 | fowl | .join #fancyfeast | 
| 05:25:11 | fowl | .j #fancyfeast | 
| 05:25:25 | fowl | it didnt work :< | 
| 06:01:39 | * | BitPuffin joined #nimrod | 
| 06:03:06 | BitPuffin | Demos: thought you meant that you became part of the openmw project lol | 
| 06:03:45 | Demos | no, although I have thought about contributing something to them from time to time | 
| 06:06:00 | BitPuffin | Demos: I like how it's ugly as fuck xD | 
| 06:06:21 | Demos | what? openmw | 
| 06:06:59 | BitPuffin | yeah | 
| 06:07:06 | BitPuffin | it's like | 
| 06:07:15 | BitPuffin | replacing a game that looks old with a game that looks old | 
| 06:07:19 | BitPuffin | but I guess they will update the assets | 
| 06:07:33 | Demos | they will probably steal the assets from skywind | 
| 06:07:34 | BitPuffin | they are probably mostly working on tech | 
| 06:07:44 | Demos | but they aim for compat with the original game data | 
| 06:07:57 | Demos | I am a bit hesitant to deal with OGRE though | 
| 06:07:58 | BitPuffin | are skywind assets CC? | 
| 06:08:06 | BitPuffin | yeah ogre is bullshit | 
| 06:08:24 | Demos | I dun know, some of them are probably (c) bethesda (from skyrim) | 
| 06:09:55 | * | ehaliewicz quit (Remote host closed the connection) | 
| 06:09:58 | * | nande quit (Read error: Connection reset by peer) | 
| 06:32:01 | Skrylar | .. openmw uses the original assets | 
| 06:32:03 | Skrylar | from the asset files | 
| 06:32:23 | Skrylar | i bet zenimax is waiting until you can play it 100% before they sue the shit out of them | 
| 06:35:53 | fowl | its not illegal to use the assets, its illegal to distribute or modify them | 
| 07:03:12 | * | Demos quit (Remote host closed the connection) | 
| 07:07:52 | * | Demos joined #nimrod | 
| 07:08:32 | Araq | dom96: good news, I found out why async creates random crashes | 
| 07:08:46 | Araq | it's all your fault of course :P | 
| 07:09:25 | fowl | dom96, he found your if random()==0: crash() line | 
| 07:11:14 | Araq | I'll patch the compiler to produce more warnings | 
| 07:11:46 | Araq | as usual more static analysis would have prevented it ... | 
| 07:29:05 | * | DAddYE quit (Remote host closed the connection) | 
| 07:29:32 | * | DAddYE joined #nimrod | 
| 07:33:53 | * | DAddYE quit (Ping timeout: 252 seconds) | 
| 07:40:50 | * | Demos quit (Read error: Connection reset by peer) | 
| 08:15:25 | Skrylar | i doubt it, but i'm curious if it would be possible to compress data by making a rectangle out of six bytes, then storing the starting coord and direction | 
| 08:16:20 | Skrylar | i'll have to think about it later; there are probably too many edge cases for that to be worth it | 
| 08:40:59 | * | bjz quit (Ping timeout: 252 seconds) | 
| 08:54:04 | * | superfunc joined #nimrod | 
| 09:01:09 | * | bjz joined #nimrod | 
| 09:07:56 | * | bjz quit (Ping timeout: 265 seconds) | 
| 09:15:48 | * | superfunc quit (Ping timeout: 240 seconds) | 
| 09:36:19 | Varriount|Mobile | Hm. Anyone read the latest forum thread? | 
| 09:36:50 | Araq | I replied. | 
| 09:39:44 | Varriount|Mobile | Yeah. Do you know why the original implementation of split is slow? | 
| 09:40:05 | Araq | who says it's slow? | 
| 09:40:17 | Araq | it's  likely GC pressure | 
| 09:40:44 | Araq | but without profiling nobody knows | 
| 09:41:22 | Skrylar | it still makes me sad that there isn't a profiling option in the compiler :< | 
| 09:41:31 | Araq | --profiler:on ? | 
| 09:41:42 | Skrylar | i didn't see it when i did nimrod --advanced | grep profile | 
| 09:41:43 | Skrylar | :\ | 
| 09:42:02 | Varriount|Mobile | Araq: When you take twice the time an interpreted, known-to-be-inefficient language does, you're slow. | 
| 09:43:57 | Skrylar | well theres the problem | 
| 09:44:03 | Araq | Varriount|Mobile: yes. so profile it. | 
| 09:44:04 | Skrylar | the profiler is mentioned in a separate help page but not --help | 
| 09:44:45 | Varriount|Mobile | Araq: With the built in profiler, or some C profiler? | 
| 09:45:08 | Araq | whatever gives you some meaningful numbers | 
| 09:45:08 | Skrylar | o_o | 
| 09:45:19 | Skrylar | timers.nim:32: unhandled exception: value out of range | 
| 09:49:40 | * | faassen joined #nimrod | 
| 09:54:34 | Skrylar | Araq: i thought the performance counter in windows was susceptible to multi-core stupidity | 
| 09:56:05 | Araq | Skrylar: do we use the performance counter of windows? | 
| 09:56:12 | Skrylar | yup | 
| 09:56:22 | Skrylar | lib/system/times.nim: QueryPerformanceCounter | 
| 09:56:59 | Skrylar | also it seems like an EOverflowError happens and makes nimprof fail to actually output any profile data | 
| 09:57:17 | Araq | aha | 
| 09:57:24 | Araq | well please fix it | 
| 09:57:31 | Skrylar | lol | 
| 09:57:35 | Skrylar | i have no idea why its doing that :P | 
| 09:59:12 | Araq | bbl | 
| 11:25:59 | EXetoC | how does ogre compare strictly graphics-wise to other engines? hopefully they chose one that was sufficiently good. maybe they want to be as open as possible | 
| 11:26:34 | EXetoC | Skrylar: a 6-byte representation of a rectangle? for what domain? | 
| 11:30:10 | EXetoC | alright "linux sucks" 2014 is out | 
| 11:31:38 | dom96 | Varriount: nice job getting Nimrod mentioned at the top of the comments here http://www.reddit.com/r/programming/comments/247989/cc_leaving_go/ :) | 
| 11:32:59 | EXetoC | hm, openssl fork called libressl? | 
| 11:45:53 | * | menscrem joined #nimrod | 
| 11:48:14 | Skrylar | EXetoC: ogre's graphics are as good as one can expect, really | 
| 11:48:48 | Skrylar | there's not really "secret tech" that makes better graphics than any other engine, they're all mostly based on the same siggraph papers ultimately | 
| 11:56:16 | Araq | hi menscrem welcome | 
| 11:56:41 | menscrem | hi all | 
| 12:02:07 | Araq | hmm this "injectStmt" is really much better than ENDB | 
| 12:02:16 | Araq | somebody should write a blog post about it | 
| 12:04:39 | Araq | maybe we should deprecate ENDB in favour of injectStmt | 
| 12:06:41 | EXetoC | there are no related limitations? | 
| 12:07:01 | Araq | well it works completely differently | 
| 12:07:11 | Araq | but captures the essence of what ENDB provides | 
| 12:16:06 | * | chezduck joined #nimrod | 
| 12:20:47 | * | Varriount|Mobile quit (Remote host closed the connection) | 
| 12:21:04 | * | Varriount|Mobile joined #nimrod | 
| 12:35:42 | EXetoC | what about stepping etc? I suppose it might not be that hard to implement while assuming that it'll apply to only one module at a time | 
| 12:37:27 | EXetoC | and then you'd need the ability to apply it to a series of modules without much effort | 
| 12:40:36 | EXetoC | libuv.nim has "cssize = int" and I stumbled upon ssize_t recently when wrapping mongodb. I guess we should have that next to the declaration of csize | 
| 12:47:14 | Varriount|Mobile | EXetoC: Regarding the comments in that reddit thread, I felt kinda cheap just namedropping like that. :/ | 
| 12:47:34 | Varriount|Mobile | *dom96 | 
| 12:49:19 | EXetoC | it's just a general question. I don't mind | 
| 12:49:58 | EXetoC | no claims of superiority or anything :p | 
| 13:02:31 | Varriount | Hi chezduck | 
| 13:05:27 | * | chezduck quit (Quit: Leaving) | 
| 13:19:09 | * | [2]Endy joined #nimrod | 
| 13:27:49 | * | brihat joined #nimrod | 
| 13:27:56 | * | brihat quit (Client Quit) | 
| 13:28:00 | * | [1]Endy joined #nimrod | 
| 13:28:42 | * | darkf quit (Quit: Leaving) | 
| 13:31:23 | * | [2]Endy quit (Ping timeout: 252 seconds) | 
| 13:37:27 | Varriount | It seems that a large part of what is slowing that ruby vs nimrod example down is GC allocation. | 
| 13:41:19 | OrionPK | ruh roh | 
| 13:48:10 | * | [1]Endy quit (Ping timeout: 252 seconds) | 
| 13:52:43 | * | [1]Endy joined #nimrod | 
| 13:58:48 | Araq | I have always said it's tuned for responsiveness and not for throughput | 
| 13:59:08 | Araq | --gc:markAndSweep can be faster | 
| 13:59:31 | Araq | but for these benchmarks nothing beats copying collector | 
| 14:00:39 | Araq | so ... it doesn't surprise me at all | 
| 14:03:56 | * | bjz joined #nimrod | 
| 14:12:14 | Varriount | Araq: Well, at least the example doesn't suddenly pause while splitting stuff. | 
| 14:20:48 | * | nande joined #nimrod | 
| 14:36:36 | * | darithorn joined #nimrod | 
| 14:37:57 | * | Varriount|Mobile quit (Remote host closed the connection) | 
| 15:01:52 | * | Jehan_ joined #nimrod | 
| 15:06:47 | * | BitPuffin quit (Ping timeout: 240 seconds) | 
| 15:07:38 | * | bjz quit (Ping timeout: 240 seconds) | 
| 15:30:17 | * | Skrylar quit (Ping timeout: 264 seconds) | 
| 15:44:44 | * | bjz joined #nimrod | 
| 15:51:17 | * | bjz quit (Ping timeout: 252 seconds) | 
| 15:56:52 | * | Matthias247 joined #nimrod | 
| 16:00:27 | * | untitaker quit (Ping timeout: 252 seconds) | 
| 16:02:08 | * | Demos joined #nimrod | 
| 16:02:38 | * | DAddYE joined #nimrod | 
| 16:05:31 | * | Jesin joined #nimrod | 
| 16:05:56 | * | untitaker joined #nimrod | 
| 16:07:13 | Araq | hi DAddYE wb | 
| 16:08:55 | Varriount | Araq: In that post about string splitting, what did you mean about things like your provided fast split not being needed when the stdlib uses TR macros? | 
| 16:09:26 | Araq | well TR macros can cure cancer | 
| 16:09:48 | Araq | but I meant your s[a..b] == split  check | 
| 16:09:57 | Araq | *sep | 
| 16:10:40 | Araq | it's the reason why I allowed split for string seps to be written this way in the stdlib | 
| 16:11:19 | * | mac01021_ left #nimrod (#nimrod) | 
| 16:11:40 | Araq | surprising how quickly somebody found out it's slow :-) | 
| 16:15:55 | Varriount | Araq: Wait, so what needs to be modified for string splitting to be faster? | 
| 16:16:32 | Araq | s[a..b] == sep  needs to be optimized to not construct a temporary string | 
| 16:16:56 | Araq | like Jehan_ showed in his answer | 
| 16:18:21 | Araq | also our "lines" iterator could use a bigger buffer | 
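A hedged sketch of the optimization being discussed (not the stdlib patch; `matchesAt` is a made-up name): compare the separator in place instead of building the temporary string that `s[a..b] == sep` allocates.

```nim
proc matchesAt(s, sep: string, start: int): bool =
  ## True if `sep` occurs in `s` at position `start`, without slicing `s`.
  if start + sep.len > s.len: return false
  for i in 0 .. sep.len - 1:
    if s[start + i] != sep[i]: return false
  result = true
```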
| 16:18:38 | * | BitPuffin joined #nimrod | 
| 16:19:39 | Demos | I was just so impressed with lines and fileLines and nimrod's other easy IO stuff; coming from c++, where it is either the more or less sane cstdio or the totally insane iostreams, it was really nice | 
| 16:23:49 | menscrem | where you can see an example of lambda functions? | 
| 16:26:58 | Demos | proc(x: int) = echo x <--- Right here! (and in the manual) | 
| 16:27:03 | Demos | there is a nice syntax as well | 
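A hedged sketch of both forms: the inline `proc` expression Demos shows, plus the `do` notation he presumably means by "a nice syntax" (using `sequtils.map` just as an illustration):

```nim
import sequtils

let nums = @[1, 2, 3]

# anonymous proc expression
let doubled = nums.map(proc (x: int): int = x * 2)

# the same thing with the `do` notation
let tripled = nums.map do (x: int) -> int:
  x * 3

echo doubled, " ", tripled
```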
| 16:28:24 | menscrem | Thank you very much | 
| 16:39:13 | * | seagoj joined #nimrod | 
| 16:51:55 | * | Matthias247 quit (Read error: Connection reset by peer) | 
| 17:01:28 | * | brson joined #nimrod | 
| 17:03:30 | * | brson_ joined #nimrod | 
| 17:05:47 | * | brson quit (Read error: Connection reset by peer) | 
| 17:09:55 | Jehan_ | The primary reason why it's "slow" is really only the allocation/initialization/deallocation of the result, by the way (ran it through a profiler). Once you fix strutils.split, that is. Even GC has practically no impact on the performance in the example. | 
| 17:11:29 | Varriount | Jehan_: This is embarrassing. I'm the one who wrote the implementation of split that accepts strings as a separator. >_< | 
| 17:12:33 | Varriount | Jehan_: I Don't suppose you could open a PR for the improvement? | 
| 17:12:35 | OrionPK | live and learn | 
| 17:12:56 | Jehan_ | It happens. :) I'm pretty sure I've written worse. I remember when I wrote the library for my Eiffel compiler years ago, I forgot to null out array entries when removing entries from an array, which eventually led to code running out of memory. | 
| 17:13:29 | Jehan_ | I probably could, once I get a chance to review that I didn't screw up anything else in the process. | 
| 17:14:09 | dom96 | Varriount: Are you sure? I'm pretty sure I wrote it lol | 
| 17:15:04 | Jehan_ | Which reminds me that find(string, string) could probably use an improvement too for not constructing the BM table (at least I think it's BM?) for short strings. | 
| 17:16:13 | Jehan_ | But really, other than optimizing that, the price for getting "C-like" performance is also exposing yourself to "C-like" bugs like Heartbleed. | 
| 17:16:45 | Varriount | dom96: I might be mistaken. | 
| 17:16:56 | Demos | well imo the whole point of nimrod is to default to safe and maybe slightly slower code, and let you opt-out when you see things on a profile | 
| 17:18:37 | Jehan_ | Demos: Yeah. Incidentally, you can even replace the copymem() call in Araq's version with a safe for loop without sacrificing performance (at least on my machine). Well, safe as long as you don't disable bound checks. :) | 
| 17:46:58 | Demos | I kinda doubt that bounds checks will slow down much real code, it is just impossible to do em in C | 
| 17:47:04 | Demos | maybe /some/ real code | 
| 17:47:06 | Demos | but still | 
| 17:50:11 | EXetoC | you can disable it when it really matters | 
| 17:51:10 | EXetoC | but yeah, fortunately you don't have to in many cases | 
| 17:52:02 | Varriount | Eh.. dom96? | 
| 17:52:02 | Jehan_ | You can try it, it doesn't. | 
| 17:53:57 | Jehan_ | Branch prediction + caching eliminates most of the overhead. And that's for dynamic arrays/strings, for fixed length arrays, the C compiler can often eliminate the check. | 
| 17:54:47 | Varriount | dom96: You should look at the newDispatcher proc in asyncdispatch.nim. The Windows implementation creates a new IO completion port each call, which won't work if multiple parts of the stdlib call it. | 
| 17:55:52 | Jehan_ | var a = @[0,1] | 
| 17:55:52 | Jehan_ | for i in 0 .. <1_000_000_000: | 
| 17:55:52 | Jehan_ | a[i and 1] = i | 
| 17:55:54 | Demos | yeah, I would be a tad concerned with stuff like overflow checks, but even then | 
| 17:56:15 | EXetoC | I said *when* it does, and yes I'm sure it very rarely does | 
| 17:56:18 | Jehan_ | Runs in 0.95 seconds with -d:release for me, 1.26 seconds with -d:memsafe (which is -d:release except bound checks). | 
| 17:56:46 | Jehan_ | Make it an array instead of a seq, and performance is identical. | 
| 17:57:09 | EXetoC | well there you go | 
| 17:57:23 | reactormonk | I suppose that speaks for seq | 
| 17:58:40 | Jehan_ | It mostly speaks for modern C compilers and processors, I think. :) | 
| 17:58:41 | EXetoC | the loop indices are statically known though | 
| 17:59:19 | EXetoC | right, no --opt. does that propagate to the C compiler? | 
| 17:59:34 | reactormonk | I get 1.2s for -d:release and 13s for -d:memsafe | 
| 17:59:55 | Jehan_ | memsafe is not a default define, that's in my custom nimrod.cfg. | 
| 18:00:03 | reactormonk | -.- | 
| 18:00:10 | Jehan_ | You're running with all checks, stack traces, etc. enabled. | 
| 18:00:39 | reactormonk | ok, 1.6s | 
| 18:00:54 | Jehan_ | I find "disable all checks except bound checks and optimize" useful enough that I gave it its own define. | 
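A hedged guess at what such a define could look like in a nimrod.cfg, reusing the `@if` syntax the cfg files support (this is an assumption, not Jehan_'s actual file):

```
@if memsafe:
  checks:off
  boundChecks:on
  opt:speed
@end
```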
| 18:01:46 | reactormonk | create a few benchmarks, create all combinations and I can run a regression and tell you how much each parameter affects your code ;-) | 
| 18:02:14 | Varriount | I wonder what the most slowing debug option. | 
| 18:02:24 | Varriount | *is | 
| 18:02:35 | reactormonk | Varriount, gimme timing, I'll give you data | 
| 18:02:58 | Varriount | Oooo goody! | 
| 18:03:05 | * | brson_ quit (Ping timeout: 252 seconds) | 
| 18:03:12 | Jehan_ | Honestly, I mostly run just with --opt:speed and all checks and tracing enabled most of the time. | 
| 18:03:32 | Jehan_ | People seriously worry too much over speed. | 
| 18:03:32 | reactormonk | CPUs are fickle, your code might be faster with --opt:size | 
| 18:04:11 | Jehan_ | OCaml generates bytecode by default and it's fast enough. I wrote an Eiffel compiler in Ruby 10 years ago, and it was fast enough (on the hardware that existed then). | 
| 18:04:28 | Varriount | Jehan_: I seriously considered compiling the Nimrod release builds that are hosted on the website with --stackTrace:on | 
| 18:04:36 | reactormonk | ohh, ruby 10 years ago, welcome to the hipster club ;-) | 
| 18:04:38 | Jehan_ | Unless you're doing multimedia or HPC, you probably don't need to go crazy. | 
| 18:05:13 | Jehan_ | Not really, I used Ruby because I couldn't use Smalltalk. Ruby was the closest thing I could get. :) | 
| 18:05:19 | reactormonk | Jehan_, or NLP / machine learning. | 
| 18:05:44 | reactormonk | Everything is slow if you scale it up by a few orders of magnitude. | 
| 18:06:10 | reactormonk | Semantics model? Oh, you only have 100m words? NEWB! | 
| 18:06:16 | Jehan_ | Yeah, but most people seriously underestimate how much raw power modern CPUs pack. | 
| 18:06:41 | * | brson joined #nimrod | 
| 18:06:53 | Jehan_ | But then, I started programming in Z80 assembly language, so my view may be a bit skewed. :) | 
| 18:07:02 | Demos | and in nimrod we already avoid lots of heap allocations and tend to use dense arrays | 
| 18:07:19 | reactormonk | Never heard of. My first language was Ruby :-/ | 
| 18:07:22 | Jehan_ | Demos: Eh. That can be good and bad. | 
| 18:07:40 | reactormonk | Demos, well, I need parse arrays in NLP :-( | 
| 18:07:43 | Jehan_ | E.g., WTB hash tables that are references. I mean, holy copying overhead, Batman. :) | 
| 18:08:00 | reactormonk | *sparse | 
| 18:08:28 | Jehan_ | Which reminds me that I still need to submit my PR for better collision handling for them. | 
| 18:09:01 | Demos | Jehan_: you can new up a hash table, at least I think you can | 
| 18:09:37 | reactormonk | Demos, new up? | 
| 18:09:52 | Jehan_ | Demos: I have my own module that wraps them in a ref and has proxy methods for [] etc. | 
| 18:09:57 | dom96 | Varriount: Yeah. And newDispatcher is only called once. | 
| 18:10:04 | Demos | use new to make a new one | 
| 18:10:18 | Demos | ideally we could use [] on a ref TTable without wrapping | 
| 18:10:56 | EXetoC | I haven't seen a constructor that allocates dynamically | 
| 18:11:11 | Demos | sometimes I use ptr TTables for this kind of stuff when I am not keeping the alias around | 
| 18:11:15 | EXetoC | I asked about allocation-agnostic construction before, and we'll get that some time | 
| 18:11:15 | Demos | EXetoC: so? | 
| 18:11:30 | EXetoC | you said there was one | 
| 18:11:31 | Demos | I use a function that I call new(initWhatever) | 
| 18:12:04 | Demos | takes a TWhatever and just copies it over | 
| 18:13:12 | Varriount | dom96: What happens if I associate both a directory handle and a socket with the global dispatcher, and try to get data from the socket? | 
| 18:13:50 | dom96 | Varriount: It'll work. | 
| 18:17:09 | EXetoC | Demos: yeah ok that's fine in most cases | 
| 18:17:34 | dom96 | Jehan_: Nice to see you in IRC finally. Have you written anything interesting in Nimrod yet? | 
| 18:18:05 | Jehan_ | I've written quite a few things over the past year, and may even be able to make them public someday. | 
| 18:18:06 | Demos | yeah, oh also EXetoC, my code fails to compile with your opengl changes, I have no idea why. It actually gets an ICE | 
| 18:18:46 | Jehan_ | Not sure if they're of interest to anybody, though, they're largely tools for a very specific application. | 
| 18:19:55 | Jehan_ | dom96: I work on the parallelization of computer algebra systems. I am using Nimrod to check/transform C/C++ source code for these, inter alia. | 
| 18:20:15 | Jehan_ | Which is very, very specific to these systems. | 
| 18:20:26 | reactormonk | c2nim on steriods? ^^ | 
| 18:20:58 | Jehan_ | Not really c2nim, though it does have to parse C (and a subset of C++). | 
| 18:21:19 | dom96 | Jehan_: That sounds interesting, but way over my head heh. | 
| 18:22:36 | Jehan_ | But it's more about semantic analysis of the code, including dataflow analysis to optimize code instrumentation. | 
| 18:25:51 | reactormonk | Jehan_, so a compiler plugin +/-? | 
| 18:26:09 | Jehan_ | No, a C/C++ analyzer/preprocessor. | 
| 18:26:24 | reactormonk | you're modifing code or only analyzing it? | 
| 18:26:25 | Jehan_ | Closer to what CIL does. | 
| 18:26:41 | Jehan_ | Both, depending on what I need (verification or instrumentation). | 
| 18:26:42 | EXetoC | Demos: you might have noticed the inclusion of a macro, which is related to some features that haven't been documented on github | 
| 18:27:31 | EXetoC | I never got an ICE. I don't know if it helps to define the symbol that omits the runtime checks | 
| 18:28:10 | EXetoC | my game still compiles | 
| 18:28:46 | Demos | well mine does not, so I am sticking with a previous commit of opengl. | 
| 18:29:35 | * | EXetoC quit (Quit: WeeChat 0.4.3) | 
| 18:29:57 | * | EXetoC joined #nimrod | 
| 18:30:03 | EXetoC | oops wrong keyboard shortcut | 
| 18:30:13 | EXetoC | try with -d:noAutoGlErrorCheck if you want | 
| 18:31:15 | EXetoC | which doesn't actually omit the macro itself. it should though, so I'll notify you when that's fixed | 
| 18:33:35 | EXetoC | but then again most people will probably want to use that feature, and the ICE is a compiler thing | 
| 18:35:24 | Demos | the glError calls are turned off in release mode right? | 
| 18:35:26 | EXetoC | alternatively just comment out "wrapErrorChecking:" and de-indent line 359-3176 ;) | 
| 18:37:17 | EXetoC | Demos: no. the opengl errors do not only catch programmer errors. you can disable that either by defining that symbol (compile-time), or by executing this: "enableAutoGlErrorCheck(false)" | 
| 18:39:07 | * | seagoj quit (Quit: Textual IRC Client: www.textualapp.com) | 
| 18:40:08 | EXetoC | this is explained in the doc comments | 
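A small usage sketch of the two options EXetoC mentions; the define and proc names are quoted from his messages above, while the module name is an assumption:

```nim
import opengl

# either compile with -d:noAutoGlErrorCheck, or disable the checks at run time:
enableAutoGlErrorCheck(false)
```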
| 18:40:13 | Demos | well glGetError can sometimes cause flushes and synchs | 
| 18:41:20 | * | Raynes quit (Max SendQ exceeded) | 
| 18:42:16 | EXetoC | I never thought of that. it's your choice though like I said | 
| 18:42:42 | Demos | yeah | 
| 18:46:16 | * | faassen quit (Remote host closed the connection) | 
| 18:50:05 | * | Matthias247 joined #nimrod | 
| 18:58:16 | Demos | can I have a "zero length" seq, where like I can "iterate" without crashing but it is a nop | 
| 19:00:14 | EXetoC | yes. problems only arise for uninitialized seq's, though not as often now with the safe semantics of add etc | 
| 19:00:35 | Araq | er .. these semantics have not been implemented yet | 
| 19:00:53 | Demos | right, but how do I make a zero length seq, newSeq does not seem to do the trick, I guess I will just use @[] | 
| 19:01:15 | EXetoC | I thought someone said that before. ok well the rest should be true | 
| 19:01:59 | EXetoC | that's the same thing isn't it? | 
| 19:02:18 | Demos | wait never mind, I was using the wrong seq | 
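A minimal sketch of the point above: an initialized zero-length seq iterates as a no-op; only an uninitialized (nil) seq is a problem.

```nim
var a = newSeq[int](0)    # allocated, length 0
var b: seq[int] = @[]     # same effect via the array constructor

for x in a: echo x        # body never runs
for x in b: echo x        # ditto
```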
| 19:03:07 | EXetoC | IRC is a great substitute for rubber duck debugging | 
| 19:10:10 | * | foxcub joined #nimrod | 
| 19:11:32 | * | gsingh93__ joined #nimrod | 
| 19:12:35 | * | gsingh93__ is now known as gsingh93 | 
| 19:12:42 | foxcub | What’s the right way to contribute something to the standard library? | 
| 19:16:02 | Varriount | foxcub: Fork the nimrod repository, create your own local branch for the contribution, add the contribution to your local branch, and then send a Pull request to pull in the feature into the main nimrod repository. | 
| 19:16:24 | Varriount | If you need help with any of that, I should be around for another 45-60 minutes. | 
| 19:16:30 | EXetoC | possibly including a test, if it appears to be necessary | 
| 19:16:43 | foxcub | I see. So a pull request on Github is the right way to go? | 
| 19:16:52 | Varriount | foxcub: Yep! | 
| 19:20:00 | Varriount | dom96: How do you think fsmonitor should handle the event buffers? | 
| 19:20:57 | * | menscrem quit (Quit: Page closed) | 
| 19:22:15 | foxcub | I wonder if I can get some feedback on whether the code would be desired in the standard library, and what may be missing for its proper inclusion? | 
| 19:22:34 | Varriount | foxcub: Ask away. | 
| 19:22:38 | foxcub | https://bitbucket.org/mrzv/heap.nim | 
| 19:22:51 | foxcub | It’s a mutable pairing heap implementation. | 
| 19:23:08 | foxcub | Useful for priority queues. | 
| 19:23:39 | Varriount | Hm. If I recall correctly, Python has a Heap module... | 
| 19:23:46 | dom96 | Varriount: Are the event buffers the place where readDirectoryChanges stores the events? | 
| 19:24:14 | Varriount | dom96: Yes. Specifically, where the OS part of readDirectoryChanges stores the events. | 
| 19:24:41 | Varriount | In Windows, if the buffer overflows, it just gets cleared. | 
| 19:25:07 | foxcub | Python’s heaps are a little simpler. They are just straight up “array heaps”. In particular, it’s not mutable. | 
| 19:25:15 | dom96 | Varriount: I think you should alloc0 it, but I can't be sure. | 
| 19:25:35 | Varriount | foxcub: It looks like a good candidate, although I would have to get approval from Araq before including it in the standard library. | 
| 19:25:51 | foxcub | Sure. I’m more curious what’s missing. | 
| 19:25:57 | foxcub | I’m guessing I need to add some docs. | 
| 19:26:04 | Varriount | foxcub: Documentation? | 
| 19:26:04 | foxcub | But content-wise? | 
| 19:26:10 | Demos | the colors module could use support for alpha | 
| 19:26:17 | foxcub | Varriount: right, what else? | 
| 19:26:18 | dom96 | foxcub: It looks like we already have a linked list implementation: http://nimrod-lang.org/lists.html | 
| 19:26:26 | foxcub | dom96: it was missing things. | 
| 19:26:27 | Demos | that is an easy change as well | 
| 19:26:54 | dom96 | foxcub: why not extend it? | 
| 19:27:22 | foxcub | dom96: ah, well, back when I did it because I wasn’t planning on modifying the standard library. But now, I see your point. | 
| 19:34:55 | Varriount | dom96... Why did you place the Windows structures, like TOverlapped, in the asyncdispatch module? | 
| 19:36:01 | Varriount | Or maybe I'm misreading.. | 
| 19:40:53 | reactormonk | foxcub, we have babel, so modifying the stdlib isn't required unless you have to go deep into the compiler | 
| 19:41:53 | dom96 | Varriount: why not? | 
| 19:42:04 | foxcub | reactormonk: I see. My logic was that something as fundamental as priority queues belongs in the standard library (it’s a pretty fundamental data structure). But if it’s not a good idea, I’d rather not invest the time in preparing a pull request. | 
| 19:42:37 | Varriount | dom96: Because I might need some of those structures in a non-asyncio capacity. :/ | 
| 19:42:52 | Varriount | There's a reason we have winlean ya know. | 
| 19:43:22 | dom96 | Varriount: When would you need them in a non-asyncio capacity? | 
| 19:43:55 | Varriount | dom96: Well, for defining the ReadDirectoryChanges procedure, for one thing. | 
| 19:44:31 | Varriount | It has a bunch of arguments, one of which is an overlapped thingie. | 
| 19:44:42 | dom96 | As far as I'm concerned TOverlapped only makes sense in a asyncio capacity. | 
| 19:45:18 | dom96 | So you'd rather import asyncdispatch and winlean instead of just asyncdispatch? | 
| 19:45:31 | Varriount | dom96: It should still be in winlean or windows.nim. That's where people expect the Windows API wrappers to be. | 
| 19:45:45 | Varriount | It says so on the tin. | 
| 19:47:23 | * | [1]Endy quit (Ping timeout: 265 seconds) | 
| 19:48:26 | dom96 | Varriount: We have a TCustomOverlapped though, not a TOverlapped. | 
| 19:48:36 | dom96 | It makes no sense to put it in winlean when it's asyncdispatch specific. | 
| 19:49:43 | Varriount | And TCompletionData? | 
| 19:50:11 | dom96 | That's custom too. | 
| 19:50:23 | Varriount | Oh. Point taken then. | 
| 19:51:45 | reactormonk | foxcub, we're trying to ship stuff out of the stdlib whenever possible | 
| 19:52:17 | Varriount | reactormonk: Yeah, but if what foxcub has to offer improves the linked list stuff... | 
| 19:52:33 | Varriount | And also trumps what's in Python's stdlib... | 
| 19:52:34 | reactormonk | Varriount, ah, ok. Then keep going with the PR. | 
| 19:52:36 | foxcub | Varriount: the linked list changes are trivial at best. | 
| 19:52:49 | foxcub | The real issue are heaps. | 
| 19:53:20 | foxcub | Currently, they are missing from Nimrod altogether. I’m suggesting adding something that’s more powerful than what languages usually provide. | 
| 19:53:22 | Varriount | foxcub: Ultimately, it's up to Araq, he's the BDFL | 
| 19:53:27 | foxcub | Right | 
| 19:53:35 | foxcub | I should ask him before I invest the effort. | 
| 19:53:45 | Varriount | We are just his mere servants/annoyances. :3 | 
| 19:53:57 | reactormonk | with commit rights >:) | 
| 19:56:31 | fowl | 10 points to whoever can tell me how this is parsed: `not present(R).matcher(input,start).has` | 
| 19:57:30 | reactormonk | fowl, not sure if the not has priority | 
| 19:58:11 | fowl | reactormonk, it does | 
| 19:58:34 | fowl | only `@` has higher precedence than other unary ops (so that @[1,2,3].join(",") works) | 
| 20:03:22 | Araq | hu?  not a.b  is certainly   not (a.b) | 
| 20:03:34 | Araq | and not  (not a).b | 
| 20:03:41 | Araq | the dot is special | 
| 20:03:51 | Araq | it's no operator in the grammar | 
| 20:04:44 | fowl | i always get high/low precedence mixed up | 
| 20:05:13 | fowl | @a.b is (@a).b, so is that high precedence or low? o.o | 
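A self-contained sketch of the grouping being discussed (`Box` and `present` are made-up stand-ins for fowl's code): `.` binds tighter than prefix `not`, and `@` binds tighter still.

```nim
import strutils

type Box = object
  has: bool

proc present(x: int): Box = Box(has: x != 0)

let r = not present(3).has           # parsed as: not (present(3).has)
let s = @["a", "b", "c"].join(",")   # parsed as: (@["a", "b", "c"]).join(",")
echo r, " ", s
```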
| 20:05:59 | Varriount | Araq: foxcub wants to ask you something. | 
| 20:06:23 | foxcub | Oh, hey, Araq | 
| 20:06:47 | foxcub | I was contemplating adding a mutable priority queue to the standard library. Is it worth doing? | 
| 20:06:48 | Araq | hi foxcub welcome | 
| 20:07:02 | foxcub | Here’s the implementation: https://bitbucket.org/mrzv/heap.nim | 
| 20:07:11 | Araq | I dunno. depends on whether it's ready to be multi-threaded | 
| 20:07:14 | foxcub | (I’d obviously need to add the docs.) | 
| 20:07:35 | foxcub | No. Concurrent _mutable_ priority queue is a complicated task. | 
| 20:07:50 | foxcub | In fact, I don’t know of a way to do it other than as a skip list. | 
| 20:08:07 | foxcub | A concurrent skip list would be a worthy addition for many reasons, but I don’t have one. | 
| 20:08:42 | foxcub | (In fact, I don’t know of any language that provides a concurrent mutable priority queue. Do you?) | 
| 20:09:00 | foxcub | Things like TBB for C++ provide non-mutable implementations. | 
| 20:09:38 | foxcub | (As a full disclosure: I’m not an expert in the subject.) | 
| 20:11:01 | Araq | bah, skip lists... I don't like them | 
| 20:11:08 | foxcub | Oh, why not? | 
| 20:11:12 | foxcub | Beautiful structures. | 
| 20:11:25 | Araq | too complex plus I don't really see the point | 
| 20:11:30 | foxcub | Super useful for all sorts of concurrent data structures. | 
| 20:11:54 | foxcub | Probably, the best way to do a comparison-based associative container. | 
| 20:11:54 | Araq | it's a linked list structure that requires fucking variable sized arrays | 
| 20:12:05 | Araq | that can't be right :P | 
| 20:12:11 | foxcub | But it buys you a lot. | 
| 20:12:25 | foxcub | For a concurrent implementation, I think it is. | 
| 20:12:32 | foxcub | (Obviously not for serial.) | 
| 20:12:58 | foxcub | Also emphasis being on comparison-based. Hash tables win if that’s not an issue. | 
| 20:13:34 | fowl | foxcub, you dont need to cast to PListNode[T], you can use normal type conversion like PListNode[T](item) | 
| 20:13:53 | foxcub | Well, if heap goes into the standard library, all that linked list stuff will go out the window. | 
| 20:13:55 | fowl | foxcub, that way you would get an exception for mismatched types | 
| 20:14:09 | foxcub | I see. that’s useful to know. Thanks. | 
| 20:14:48 | foxcub | Araq: also, what’s with the desire for multi-threaded code? None of the collections already in the standard library are. | 
| 20:14:54 | foxcub | Or am I wrong on that? | 
| 20:15:13 | Varriount | I think we have a lock free hash table implementation. | 
| 20:15:23 | foxcub | Ah, that’s great. I didn’t know that. | 
| 20:15:54 | foxcub | Varriount: which module? | 
| 20:16:22 | foxcub | What primitive is it using to avoid locks? | 
| 20:16:46 | foxcub | Varriount: never mind, found it. | 
| 20:16:49 | fowl | foxcub, nice readable code btw | 
| 20:16:57 | foxcub | fowl: thanks | 
| 20:17:02 | Varriount | I have to go. Calculus class is coming up. | 
| 20:20:00 | Demos | foxcub: indeed quite nice, and the pairing heap is a somewhat nasty data structure to implement :D | 
| 20:20:21 | foxcub | Demos: thanks. Why nasty? | 
| 20:21:56 | * | Matthias247_ joined #nimrod | 
| 20:22:22 | * | Raynes joined #nimrod | 
| 20:24:53 | * | Matthias247 quit (Ping timeout: 264 seconds) | 
| 20:26:36 | Demos | just because it is a tree and it can be subtle. I implemented it in c++ which is harder because you do not get a garbage collector | 
| 20:29:19 | foxcub | I see. | 
| 20:34:18 | Araq | foxcub: both head and tail return 'lst.header' | 
| 20:34:48 | foxcub | Yeah | 
| 20:35:10 | foxcub | The list ties back on itself through the header. | 
| 20:36:13 | Araq | well it's confusing | 
| 20:36:19 | foxcub | Again, the linked list stuff would go out if I put it into the standard library. I actually don’t remember any more what was missing. | 
| 20:36:50 | foxcub | But saves on the overhead. | 
| 20:36:54 | foxcub | If you have lots of tiny lists. | 
| 20:40:12 | Araq | you know what would be much more useful right now? a gap array | 
| 20:40:32 | foxcub | Why? | 
| 20:40:37 | Araq | or something else that supports fast interval checks | 
| 20:40:59 | foxcub | What do you mean? | 
| 20:41:11 | Araq | for the interior pointer checking in the GC | 
| 20:41:54 | foxcub | I’m not familiar with that at all. What are the operations that you actually need? | 
| 20:41:57 | * | DAddYE quit (Remote host closed the connection) | 
| 20:42:15 | * | DAddYE joined #nimrod | 
| 20:42:27 | fowl | when i use -d:useNimRtl how does it find the rtl dll? | 
| 20:42:39 | Araq | x in [a..b, c..d, e..f, ...] | 
| 20:42:51 | * | nande quit (Read error: Connection reset by peer) | 
| 20:42:54 | Araq | and if yes, in which interval it is | 
| 20:43:15 | foxcub | What does this have to do with gap arrays? Why not something like an interval tree? | 
| 20:43:20 | Araq | number of intervals: ~7000 or more | 
| 20:43:34 | foxcub | Are all the intervals disjoint? | 
| 20:43:37 | Araq | yes | 
| 20:43:40 | foxcub | Ah, I see. | 
| 20:44:01 | Araq | number of 'in' checks we need to perform: thousands up to millions | 
| 20:44:10 | foxcub | Why is it not enough to keep two sorted searchable sequences? | 
| 20:44:35 | foxcub | Are the intervals changing? | 
| 20:44:39 | Araq | well we need to be able to add and remove intervals and that should be fast too | 
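A hedged sketch of the query Araq describes, done the simple way: binary search over a sorted array of disjoint intervals. This is an illustration, not the compiler's actual code.

```nim
type Interval = tuple[a, b: int]   # inclusive bounds, kept sorted and disjoint

proc findInterval(ivs: openArray[Interval], x: int): int =
  ## Index of the interval containing `x`, or -1 if none does.
  var lo = 0
  var hi = ivs.len - 1
  while lo <= hi:
    let mid = (lo + hi) div 2
    if x < ivs[mid].a: hi = mid - 1
    elif x > ivs[mid].b: lo = mid + 1
    else: return mid
  result = -1

let ivs: array[3, Interval] = [(10, 19), (30, 39), (50, 59)]
echo findInterval(ivs, 35)   # -> 1
echo findInterval(ivs, 40)   # -> -1
```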
| 20:44:55 | foxcub | Ok, then, why not a red-black tree? Or a skip list?! | 
| 20:45:13 | Araq | we currently use some balanced tree | 
| 20:45:22 | foxcub | Not fast enough? | 
| 20:45:29 | Araq | but I bet a simple array would be faster | 
| 20:45:49 | foxcub | How would updates work | 
| 20:45:50 | foxcub | ? | 
| 20:46:01 | Araq | "fast" is relative. could always be faster | 
| 20:46:11 | Jehan_ | The Boehm GC uses a hash table internally, for what it's worth (well, for 64-bit architectures). | 
| 20:46:24 | Araq | yeah I know | 
| 20:47:03 | foxcub | Explain how you want to use a gap array. I can’t say I’m familiar with them. | 
| 20:47:31 | Araq | gap array is simply a sorted array with "gaps" for easier insert and delete operations | 
| 20:47:52 | foxcub | And if a gap becomes full? God help you? | 
| 20:48:18 | Araq | you can always resize the array | 
| 20:48:38 | foxcub | And move data around. | 
| 20:49:15 | foxcub | I mean, it’s a fine heuristic, but I wouldn’t be so optimistic about it. | 
| 20:49:32 | foxcub | Is there anything you can say about the distribution of the queries? Can you optimize with respect to that? | 
| 20:49:43 | Jehan_ | At this point, though, why not use a B/B+-Tree with nodes allocated contiguously? | 
| 20:50:18 | Araq | well we could do a lot. I finished when the overhead was in the noise for bootstrapping in release mode | 
| 20:51:26 | Araq | foxcub: the queries all come in big batches | 
| 20:51:53 | Araq | you also could sort before that and keep it unordered most of the time | 
| 20:52:00 | foxcub | I was more thinking of the frequency with which intervals get hit. | 
| 20:52:20 | foxcub | Right, that would be an obvious thing to try. Sort the queries, and optimize with respect to that. | 
| 20:52:48 | Araq | I don't think this works at all | 
| 20:53:01 | Araq | the queries are pseudo-random numbers | 
| 20:53:25 | Araq | well it's the content of the stack | 
| 20:54:11 | foxcub | Hold up. Which of the two things doesn’t work? | 
| 20:54:20 | Jehan_ | Is it really a problem other than for large stack sizes? | 
| 20:54:27 | foxcub | Trying to adjust to the distribution of the queries, or the sorting idea? | 
| 20:54:39 | * | milosn_ quit (Quit: Lost terminal) | 
| 20:55:03 | * | milosn joined #nimrod | 
| 20:55:31 | Araq | foxcub: I think your idea of optimizing towards the frequency with which intervals get hit won't work | 
| 20:56:16 | foxcub | I see. Well, that’s the kind of thing one can only judge from experience (of which I have none here). Of course, you could trivially profile how often the intervals get hit, and see if it’s really uniform. | 
| 20:56:24 | Araq | Jehan_: this part often shows up in debug builds of the compiler | 
| 20:57:09 | Araq | and I'm not saying it's important, I'm saying it's a very useful thing to do for foxcub | 
| 20:57:38 | foxcub | foxcub wouldn’t know where to start. ;-) Nor does he really have time. | 
| 20:57:38 | Araq | it's a nice real world problem where lots of things could work | 
| 20:57:53 | Araq | oh I thought you do, sorry | 
| 20:58:19 | foxcub | No, it was more of: I have some old code, I should probably make it available to others. So I was looking for the best way to do it. | 
| 20:58:29 | foxcub | Not looking for work. | 
| 20:58:36 | Araq | too bad. | 
| 20:58:40 | Araq | we have plenty. | 
| 20:58:46 | Jehan_ | Hmm, if I wanted to speed that up, I'd probably go with a variant of a Bloom filter. | 
| 20:59:06 | foxcub | Yeah, sorry. As much as I’d love to get into compilers, I don’t think I can afford the time. | 
| 20:59:13 | Araq | yeah, didn't know about Bloom filters when I wrote it :P | 
| 20:59:17 | Jehan_ | Not sure if it would actually solve the problem, but it'd be fairly easy to implement. | 
| 20:59:46 | Araq | Bloom filter is similar to a hash table in that it requires O(N) for large objects | 
| 20:59:53 | Jehan_ | A proper Bloom filter wouldn't solve it, you'd need a variant that answers {yes, no, unknown}, where yes/no are guaranteed to be correct. | 
| 21:00:06 | Jehan_ | So that you'd only have to fall through to the expensive implementation for unknown. | 
| 21:00:11 | Araq | yes. | 
| 21:00:59 | Araq | but see above, O(N) for megabyte sized objects was not cool enough | 
| 21:01:00 | Jehan_ | As I said, I have no idea if that would actually help, but it'd be relatively easy to implement, so it might be worth a try. | 
| 21:01:11 | Jehan_ | Hmm? | 
| 21:01:36 | Araq | the interior pointer checking is only complex for objects larger than a page | 
| 21:01:36 | Jehan_ | I'd hash the page pointer to a fixed size hash table. | 
| 21:01:57 | Araq | no you need to hash *every* page the object consists of | 
| 21:01:58 | Jehan_ | Yes, but I would suspect the problem is mostly words that aren't interior pointers? | 
| 21:02:38 | Jehan_ | Or is it indeed primarily to find the base address for large objects? | 
| 21:03:09 | Araq | that's a problem too | 
| 21:03:34 | Araq | but you check if the page is part of some big object | 
| 21:03:43 | Jehan_ | Even so, I'd guess that probabilistically most interior pointers are on the same page. | 
| 21:03:45 | Araq | so you need to hash every page in that object | 
| 21:03:58 | Jehan_ | And you'd remember information from previous passes. | 
| 21:04:02 | Araq | how else could it work? | 
| 21:04:35 | Jehan_ | No, what I'd do is: have a hash table without collision resolution. If you have duplicate entries, the old one gets evicted. | 
| 21:05:02 | Jehan_ | I look up a word in the hash table. If it's already there, I've got a fast answer. Otherwise, I use the expensive algorithm and remember the result in the hash table. | 
| 21:05:32 | Araq | most lookups are failures though | 
| 21:05:49 | Araq | most stack locations are not pointers into the GC'ed heap | 
| 21:05:51 | Jehan_ | Works for failures, too, as long as there's some pattern to the failures (e.g., return addresses). | 
| 21:06:24 | Jehan_ | It works as long as there's a pattern to (address >> log2(pagesize)) | 
| 21:06:28 | Jehan_ | If not, it doesn't. :) | 
| 21:06:33 | Demos | Jehan_: so this is essentially a cache | 
| 21:07:00 | Jehan_ | Sort of, yeah. | 
| 21:07:14 | Araq | so essentially you're blacklisting pages on the fly | 
| 21:07:23 | Araq | that could indeed work | 
| 21:07:35 | Jehan_ | Yup. As I said, I have no idea how it would work out in practice. | 
| 21:07:57 | Jehan_ | But the thing is that it'd be pretty easy to implement and try. | 
| 21:08:02 | Araq | well foxcub will find out for us. | 
| 21:08:16 | Jehan_ | Since you only need a statically allocated data structure and some pretty simple queries on it. | 
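A minimal sketch of the scheme Jehan_ outlines (all names and sizes here are assumptions): a fixed-size, direct-mapped table keyed by the page of the address. A colliding entry simply evicts the old one, and a miss falls through to the expensive check, whose result is then remembered.

```nim
const
  PageShift = 12                 # assume 4 KiB pages
  CacheSize = 4096               # must be a power of two

type
  Answer = enum aUnknown, aNo, aYes
  Entry = tuple[page: int, answer: Answer]

var cache: array[CacheSize, Entry]   # zero-initialized, so every lookup starts as aUnknown

proc lookup(address: int): Answer =
  ## Fast answer if this page was seen before, otherwise aUnknown.
  let page = address shr PageShift
  let e = cache[page and (CacheSize - 1)]
  result = if e.page == page: e.answer else: aUnknown

proc remember(address: int, answer: Answer) =
  ## Record the result of the expensive check, evicting any colliding entry.
  let page = address shr PageShift
  cache[page and (CacheSize - 1)] = (page, answer)
```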
| 21:08:46 | foxcub | Araq: Sorry I tuned out. What will I find out? | 
| 21:09:54 | Araq | foxcub: whether Jehan_'s idea will work out | 
| 21:10:36 | foxcub | The idea being to cache some of the answers? | 
| 21:10:50 | Araq | yes. | 
| 21:11:15 | foxcub | Right, so adapt to query distribution. Good idea. Alas I doubt I’ll be finding anything out. | 
| 21:11:47 | Araq | well it's up to you. you can either write docs and put your module into a babel package | 
| 21:12:14 | Araq | or you can hack the stack scanning part of our world class GC | 
| 21:13:17 | fowl | why did that sound sarcastic | 
| 21:13:22 | fowl | dont hate on the gc | 
| 21:13:23 | fowl | >_> | 
| 21:13:41 | foxcub | Got you. | 
| 21:13:43 | Araq | and satisfy our curiosity. | 
| 21:13:49 | Jehan_ | For what it's worth, I think the GC is actually pretty good. | 
| 21:14:33 | Jehan_ | Since it doesn't fall into the trap of trying to work with multiple threads (which is where all the hard stuff comes from). | 
| 21:15:47 | fowl | it performs well | 
| 21:19:18 | * | gsingh93 quit (Quit: Connection closed for inactivity) | 
| 21:20:05 | * | oxful quit (Ping timeout: 255 seconds) | 
| 21:41:12 | * | Jehan_ quit (Quit: Leaving) | 
| 21:49:04 | Varriount | I tried running that split example with the GC turned off. It ate all my memory in about 15 seconds. | 
| 21:53:17 | Araq | again, my code review in fact caught the problem but I considered it a temporary problem | 
| 21:55:28 | Araq | who uses split for high performance code anyway :P | 
| 21:59:39 | fowl | how do seq/string work without GC? when do they get freed? same q about refs | 
| 21:59:50 | Demos | I think they just leak | 
| 22:00:00 | Araq | simple. they don't, they leak. | 
| 22:00:18 | fowl | lame | 
| 22:00:23 | Araq | in theory we could use C++-style memory management with seq/string | 
| 22:00:37 | Araq | in practice we don't yet | 
| 22:00:57 | fowl | maybe destructors could work | 
| 22:01:23 | Araq | that's what I'm talking about, yes | 
| 22:02:20 | fowl | i mean to say maybe it would work now with the destructor mechanism | 
| 22:02:47 | Araq | nope | 
| 22:03:09 | Araq | destructors cause the usage of a named variable | 
| 22:03:34 | Araq | it's still unclear whether that's a misfeature or pure genius :P | 
| 22:06:16 | fowl | named? you mean assigned to a var in c? | 
| 22:06:44 | Araq | in nimrod | 
| 22:07:12 | fowl | oh | 
| 22:23:20 | NimBot | Araq/Nimrod devel fab8cee Araq [+0 ±5 -0]: minor tweaks; updated todo.txt | 
| 22:23:20 | NimBot | Araq/Nimrod devel 0049a2a Araq [+2 ±17 -0]: Merge branch 'devel' of https://github.com/Araq/Nimrod into devel | 
| 22:23:20 | NimBot | Araq/Nimrod devel 6a39155 Araq [+0 ±1 -0]: small bugfix for iterators | 
| 22:23:20 | NimBot | Araq/Nimrod devel 5710a05 Araq [+0 ±1 -0]: fixed a typo | 
| 22:23:20 | NimBot | 3 more commits. | 
| 22:24:08 | Araq | dom96: I also have fixed your async bug, let me know if you want to see how I did it | 
| 22:24:56 | Varriount | Araq: Have you ever written in Forth? | 
| 22:25:15 | Araq | no | 
| 22:25:30 | * | gsingh93 joined #nimrod | 
| 22:25:51 | Varriount | Hi gsingh93 | 
| 22:32:15 | dom96 | Araq: tell me | 
| 22:32:29 | dom96 | or rather, show me | 
| 22:32:53 | fowl | dom96, is it possible to do platform specific stuff in a .babel (like @if linux in nimrod cfg files) | 
| 22:33:36 | dom96 | fowl: nope, but I guess I should implement that. | 
| 22:34:57 | dom96 | Araq: can you just commit your fix? | 
| 22:35:22 | Araq | ok, I need to clean up this stuff | 
| 22:36:37 | * | Zuchto_ joined #nimrod | 
| 22:37:04 | * | Zuchto quit (Ping timeout: 258 seconds) | 
| 22:37:05 | * | Zuchto_ is now known as Zuchto | 
| 22:38:32 | * | Skrylar joined #nimrod | 
| 22:53:58 | NimBot | Araq/Nimrod devel d438ecc Araq [+0 ±3 -0]: async might work now reliably | 
| 22:53:58 | NimBot | Araq/Nimrod devel bd705a5 Araq [+0 ±8 -0]: compiler prepared for the new comment handling | 
| 22:54:18 | Araq | dom96: here you go but beware | 
| 22:54:29 | Araq | I was too tired to test it properly | 
| 22:55:11 | dom96 | Good. I'm glad I held off on the "good job" :P | 
| 22:59:03 | * | q66 joined #nimrod | 
| 22:59:04 | * | q66 quit (Changing host) | 
| 22:59:04 | * | q66 joined #nimrod | 
| 23:00:05 | * | Matthias247_ quit (Read error: Connection reset by peer) | 
| 23:01:12 | Araq | good night | 
| 23:01:21 | * | Matthias247 joined #nimrod | 
| 23:01:30 | dom96 | Araq: wait | 
| 23:01:37 | dom96 | It doesn't work at all now. | 
| 23:01:53 | Araq | what's the error? | 
| 23:02:22 | Araq | come on hurry | 
| 23:02:35 | dom96 | it's not accepting any connections | 
| 23:02:37 | dom96 | there is no error | 
| 23:02:58 | Araq | maybe I GC_unref too early? | 
| 23:03:08 | Araq | I tested it with these GC_unrefs deactivated | 
| 23:03:47 | * | Matthias247 quit (Read error: Connection reset by peer) | 
| 23:04:26 | dom96 | Araq: You made GetQueuedCompletionStatus less safe | 
| 23:05:39 | dom96 | how come? | 
| 23:06:01 | NimBot | Araq/Nimrod devel 81d4049 Araq [+0 ±1 -0]: bugfix: MS-GC GC_unref | 
| 23:06:16 | Araq | dom96: I didn't want to cast, that's why | 
| 23:06:55 | dom96 | that's very bug-prone | 
| 23:06:55 | Araq | fixed it btw | 
| 23:07:06 | Araq | dom96: so change it back | 
| 23:07:11 | Araq | I don't mind | 
| 23:08:11 | dom96 | k | 
| 23:08:15 | dom96 | lket me test | 
| 23:08:17 | dom96 | *let | 
| 23:08:51 | * | xenagi joined #nimrod | 
| 23:09:10 | OrionPK | araq have time to look at 1140 yet? | 
| 23:09:39 | Araq | async had highest priority, I can look at your bug tomorrow | 
| 23:11:30 | dom96 | Araq: Awesome it works! Good job. | 
| 23:11:38 | dom96 | What's interesting is how much faster mark and sweep is | 
| 23:12:27 | OrionPK | OK | 
| 23:13:00 | OrionPK | what's the trade-off with your mark and sweep? | 
| 23:13:08 | dom96 | It uses less memory too lol | 
| 23:13:13 | OrionPK | why isn't it default if it's faster? | 
| 23:13:13 | dom96 | well, 240 bytes less | 
| 23:13:15 | dom96 | but still | 
| 23:13:55 | Araq | OrionPK: it is no realtime gc | 
| 23:14:26 | OrionPK | ah | 
| 23:14:43 | dom96 | Araq: So why is M&S faster? | 
| 23:15:01 | Araq | because it often is? | 
| 23:15:18 | dom96 | really? | 
| 23:15:22 | dom96 | I've been living a lie. | 
| 23:16:16 | Araq | you only need to increase the size of your workset and then the RC GC wins | 
| 23:16:41 | dom96 | With both GCs the memory usage grows to ~5MB | 
| 23:16:46 | dom96 | I wonder what that 5MB is. | 
| 23:17:01 | Araq | "with both GCs"? | 
| 23:17:09 | Araq | how does that work? | 
| 23:17:12 | dom96 | Yeah, I created a hybrid! | 
| 23:17:19 | dom96 | Come on, isn't it obvious what I mean? | 
| 23:17:29 | Araq | ok | 
| 23:17:37 | Araq | well that's the threshold | 
| 23:17:41 | Araq | simple, hu? | 
| 23:18:20 | dom96 | threshold for max memory allocated? | 
| 23:18:53 | dom96 | Why so high? I want this to run on my Commodore 64. | 
| 23:18:54 | dom96 | :P | 
| 23:19:04 | Araq | before the GC or the cycle collector run | 
| 23:19:53 | Araq | good night | 
| 23:20:08 | dom96 | Araq: Bye. Thanks for fixing it! | 
| 23:20:30 | Araq | only took me weeks :P | 
| 23:20:52 | dom96 | well, we fixed a couple of issues over those weeks IIRC | 
| 23:20:53 | Araq | please don't make any mistakes anymore in the future | 
| 23:21:03 | dom96 | hah | 
| 23:21:09 | dom96 | How am I supposed to know that these are mistakes? | 
| 23:21:19 | dom96 | You need to make the compiler tell me. | 
| 23:21:37 | Araq | added it to my todo, as you can see | 
| 23:22:15 | Araq | bye | 
| 23:22:24 | dom96 | bye | 
| 23:22:36 | dom96 | So guys. New async is officially ready for prime time. | 
| 23:26:25 | Varriount | I stil wouldn' | 
| 23:26:38 | Varriount | I still wouldn't mark the API as stable just yet. | 
| 23:26:51 | dom96 | Yes, and I'm not. | 
| 23:27:03 | dom96 | But I need help porting all the stdlib modules to the new async stuff | 
| 23:28:27 | Varriount | It would help if we knew *how* to port the stdlib modules over. | 
| 23:29:22 | dom96 | wooh. Mark and sweep puts us ahead of Go's speed. | 
| 23:29:37 | Varriount | dom96: For what? | 
| 23:29:37 | Demos | I thought we were already ahead of Go? | 
| 23:29:44 | dom96 | Yeah, I guess it's time for me to write a blog post about how it works. | 
| 23:29:53 | dom96 | Demos: nope | 
| 23:30:09 | Demos | really? Wait what benchmark are we even talking about? | 
| 23:30:20 | * | q66 quit (Ping timeout: 252 seconds) | 
| 23:30:21 | dom96 | It's just a simple http server benchmark | 
| 23:31:23 | Demos | oh, OK. In general I would assume we are faster than Go, but that is Go's home turf | 
| 23:31:31 | dom96 | Yeah, in general we should be. | 
| 23:31:40 | dom96 | I'm talking about the new async stuff. | 
| 23:31:53 | dom96 | I did force it to run on one thread though. | 
| 23:32:18 | Demos | can anyone think of a reason the type Foo = bar | baz stuff could not just be a macro or even a template? | 
| 23:32:20 | dom96 | and Go is sending a lot more data. | 
| 23:32:32 | dom96 | so it's a bit unfair | 
| 23:33:04 | Varriount | dom96: You do know that, at least on Windows, using IOCP means that Nimrod *should* be able to scale well? | 
| 23:33:06 | dom96 | Demos: May need to wrap it in some keyword or operator. | 
| 23:33:56 | dom96 | Varriount: yes, that is the whole point of IOCP. | 
| 23:34:05 | dom96 | Varriount: Go uses it too i'm sure. | 
| 23:35:02 | EXetoC | Demos: what are other ways to construct type classes? | 
| 23:35:57 | EXetoC | I don't get it | 
| 23:36:15 | EXetoC | well | 
| 23:36:17 | dom96 | It's a pity that we didn't save the `|` for algebraic data type construction. | 
| 23:36:49 | * | foxcub quit (Quit: foxcub) | 
| 23:36:58 | EXetoC | I suppose you just want to construct the AST without going through the compiler | 
| 23:38:06 | Demos | well `|` is very close to algebraic data types | 
| 23:38:38 | Demos | I want to transform it into a user defined typeclass so that I can start removing code from our overload resolution monster | 
| 23:38:39 | EXetoC | yes at compile-time | 
| 23:39:16 | Demos | right, for runtime you have variants, seems to me that the most common uses of algebraic data types can be done just fine at compile time | 
| 23:39:26 | EXetoC | see nnkTypeClassTy | 
| 23:39:57 | EXetoC | it has many uses at run time | 
| 23:40:23 | EXetoC | and someone was working on something that would've simplified the usage | 
| 23:40:57 | EXetoC | whoever was hanging in the channel but hasn't been seen for a month or two | 
| 23:41:21 | Varriount | zahary maybe? | 
| 23:41:59 | EXetoC | nope would've remembered that :p | 
| 23:42:22 | Varriount | dom96: Right now I'm tackling fsmonitor. One thing at a time. | 
| 23:42:53 | EXetoC | I think he was wearing sunglasses on his github image | 
| 23:44:28 | * | DAddYE_ joined #nimrod | 
| 23:46:00 | dom96 | Varriount: sure. | 
| 23:47:06 | EXetoC | mflamer maybe | 
| 23:47:11 | * | DAddYE quit (Ping timeout: 240 seconds) | 
| 23:52:31 | dom96 | good night | 
| 23:56:57 | * | darkf joined #nimrod |