00:05:56 | * | nande joined #nimrod |
00:30:39 | * | DAddYE_ quit (Remote host closed the connection) |
00:52:47 | * | Demos quit (Read error: Connection reset by peer) |
00:53:51 | * | dom96 quit (Ping timeout: 252 seconds) |
00:53:51 | * | Amrykid quit (Ping timeout: 252 seconds) |
00:57:22 | * | springbok joined #nimrod |
01:01:01 | * | Amrykid joined #nimrod |
01:01:18 | * | Demos joined #nimrod |
01:02:31 | * | dom96 joined #nimrod |
01:15:33 | * | njoejoe joined #nimrod |
01:17:45 | njoejoe | is it true that the honey badger is nimrod's mascot? |
01:19:56 | Varriount | njoejoe: Yes. Although the design hasn't been finalized yet. |
01:20:50 | Varriount | njoejoe: This is the prototype. http://reign-studios.net/philipwitte/nimrod/mascot.png |
01:22:01 | njoejoe | sweet! snakes (of all kinds...) beware! :-) |
01:22:45 | Varriount | njoejoe: Where did you hear about the mascot, by the way? |
01:23:34 | njoejoe | https://github.com/gradha/quicklook-rest-with-nimrod |
01:24:04 | njoejoe | and the youtube video he links to is hilarious |
01:24:27 | Varriount | njoejoe: Actually, that page predates the decision of Nimrod's mascot, although it did help with the idea. |
01:25:14 | Varriount | I've actually looked into replicating Gradha's plugins for Windows Explorer, however integrating with the Windows shell is a bit... complicated. |
01:28:00 | njoejoe | yeah i wouldn't envy that job. i'm on linux so haven't tried quicklook myself |
01:28:16 | Varriount | For one thing, it means implementing C++ interfaces, which I don't know how to do in Nimrod. :/ |
01:32:18 | Demos | Varriount: hint: check out the bits of code in the windows header that compile when you have C_INTERFACE defined |
01:33:17 | * | DAddYE joined #nimrod |
01:35:58 | Varriount | Demos: And how would I know which bits were compiled? |
01:36:38 | Demos | #ifdef? if you have a full project VS can dim the non-compiled parts |
01:36:51 | Demos | but you will see like #ifdef C_INTERFACE ..... |
01:37:08 | Varriount | Demos: I'm looking at how Pascal does it. |
01:37:16 | Demos | you get structs with like one member that is just vtbl and you wrap that |
01:37:29 | Demos | same abi as the C++ version I bet |
01:37:44 | Demos | you could also use IDispatch, but that is not really how you want to do it in native code |
01:37:56 | * | DAddYE quit (Ping timeout: 252 seconds) |
01:38:55 | njoejoe | how to get data from readBuffer() into a string? https://gist.github.com/anonymous/eda0623f4d143d797cdc |
01:39:37 | Varriount | njoejoe: Are you trying to implement a getUserInput() like procedure? |
01:40:31 | * | brson quit (Ping timeout: 252 seconds) |
01:40:39 | Varriount | njoejoe: Also, cast to an array of chars. |
01:42:16 | njoejoe | not getuserInput like... just experimenting with reading data. |
01:42:17 | * | brson joined #nimrod |
01:43:13 | Varriount | njoejoe: Well, you could allocate an array instead of using an unmanaged allocation, and cast as needed. |
01:46:01 | njoejoe | trying lc = cast [array[char]]( buf).countLines but it doesn't like that syntax |
01:46:47 | Varriount | njoejoe: One moment. |
01:48:16 | * | bjz joined #nimrod |
01:51:19 | * | wooya joined #nimrod |
01:52:19 | Varriount | njoejoe: https://gist.github.com/Varriount/6c20d6f47add09c694ff |
01:52:24 | Varriount | Hi wooya, bjz |
01:52:32 | bjz | o/ |
01:53:09 | fowl | Varriount, did you try that? |
01:53:37 | Varriount | fowl: No. But it passed nimrod's check command. |
01:53:43 | Varriount | I have to let my dog out. |
01:55:05 | fowl | Varriount, njoejoe, (addr buf) is not correct, use addr buf[0] or buf.cstring |
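A minimal sketch of the pattern under discussion, using a managed string instead of an unmanaged buffer (the file name is illustrative; `readBuffer` and `strutils.countLines` are stdlib):

```nimrod
import strutils

# Read raw bytes into a managed string, then treat it as text.
# "input.txt" is illustrative; note addr buf[0], not addr buf.
var f = open("input.txt")
var buf = newString(4096)
let n = f.readBuffer(addr buf[0], buf.len)
buf.setLen(n)                 # trim to the bytes actually read
echo buf.countLines()
f.close()
```

Casting the raw pointer to an array type, as attempted above, isn't needed once the data lives in a string.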
02:03:53 | * | dom96 quit (Ping timeout: 252 seconds) |
02:05:13 | EXetoC | Demos: no, nnkTypeClassTy is for user-defined type classes apparently |
02:05:24 | EXetoC | but it's just a matter of using macros.dumpTree to find out |
02:05:53 | EXetoC | no wait, you said user-defined. |
02:06:03 | Demos | right, I want to eliminate | and & in favor of code that generates user defined typeclasses |
02:06:18 | EXetoC | while also mentioning '|'. but the terminology is confusing |
02:07:11 | * | wooya quit (Ping timeout: 240 seconds) |
02:07:14 | njoejoe | thanks Varriount and fowl . It compiles now! I tried addr buf[0] but still buf.countLines() is coming up with 0. let me look closer |
02:07:58 | EXetoC | those are user-defined yes, but are called just "type classes", and then you have "user-defined type classes" which is that new feature (type y = generic c ...) |
02:09:01 | Demos | I know what they are. I was asking if it was feasible to replace '|' and '&' with macros generating user defined typeclasses |
02:09:01 | * | dom96 joined #nimrod |
02:10:55 | fowl | Demos, no because those types can go anywhere (proc x(y: int|string)) and not just in a type section |
02:11:14 | fowl | Demos, what is & btw, is that a thing? |
02:11:31 | Demos | I thought it was, maybe not though |
02:11:47 | fowl | hm i can see how it could be used to combine type classes |
02:11:51 | Demos | right I guess we would need some kind of lambda typeclasses |
02:11:55 | fowl | but saying type1 & type2 doesnt make sense |
02:13:33 | EXetoC | fowl: you can't do something like "type createTypeClass(...))"? |
02:13:35 | * | bystander joined #nimrod |
02:14:31 | fowl | EXetoC, you mean get the type of a function call? |
02:15:15 | EXetoC | actually, you can't just inject something into the tree like that, right? |
02:16:44 | fowl | EXetoC, im not sure what you mean, you have a bigger example? |
02:17:47 | * | bystander quit (Client Quit) |
02:18:34 | EXetoC | no I'm only referring to AST construction, which is what this concerns |
02:20:02 | fowl | ohhh |
02:20:10 | EXetoC | as in "type createTypeClass("x", int, char))" rather than "type x = int|char" |
02:20:19 | fowl | so createTypeClass is a macro which takes something like int | char |
02:20:43 | fowl | EXetoC, possibly. ive never tried a macro in a type section before |
02:21:05 | fowl | you could certainly do it outside (declTypeClass(x, int|char)) |
02:21:30 | fowl | EXetoC, but tbh this is not very useful, we dont need to use type classes for this, type x = int|char works |
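For reference, the two '|' forms being contrasted here, a named alias in a type section versus inline use in a signature (a sketch; all names are illustrative):

```nimrod
type
  IntOrChar = int | char        # named type class via '|'

proc describe(x: IntOrChar): string =
  result = "got " & $x

proc show(y: int | string) =    # '|' inline, no type section needed
  echo y

echo describe('a')
show(42)
```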
02:22:10 | EXetoC | yes but his motivation is to reduce the complexity of the compiler |
02:23:20 | fowl | whos? |
02:23:30 | fowl | oh |
02:23:37 | Demos | yeah |
02:24:00 | EXetoC | and not being able to inject nodes from anywhere would be too big of a limitation |
02:24:06 | Demos | araq wants my summer project to be rewriting sigmatch.nim |
02:24:15 | Demos | what if I used that getCallsite thing |
02:24:54 | fowl | Demos, make a superCallsite() function to access the level above the callsite (:< |
02:24:54 | EXetoC | 'callsite' I think. I don't think these are regarded as call sites, but maybe I'm wrong |
02:25:11 | Demos | oh :( |
02:25:37 | Demos | well I could transform it within sigmatch, not as much of a simplification but still something |
02:26:19 | renesac | Demos, while you are at it, try fixing unsigned integer handling too :) |
02:26:34 | EXetoC | I don't think you need to access the call site. this only concerns the node itself, doesn't it? |
02:26:54 | EXetoC | so you'd need some way of injecting a node from anywhere in the tree |
02:30:39 | * | dom96 quit (Ping timeout: 252 seconds) |
02:30:53 | EXetoC | well, you could explicitly modify the parent node. The inability to go higher up than the callsite is by design IIRC, and I suppose this would count as that |
02:31:45 | * | Amrykid quit (Ping timeout: 252 seconds) |
02:31:59 | * | bjz quit (Quit: Textual IRC Client: www.textualapp.com) |
02:32:02 | * | Amrykid joined #nimrod |
02:32:23 | fowl | renesac, what aspect of it is broken |
02:33:35 | Demos | another option is to change the parser, but I think I will start just messing with sigmatch's input |
02:34:10 | * | DAddYE joined #nimrod |
02:36:55 | * | bjz joined #nimrod |
02:37:22 | renesac | https://github.com/Araq/Nimrod/issues/936 |
02:37:35 | * | bjz quit (Remote host closed the connection) |
02:37:55 | * | bjz joined #nimrod |
02:38:01 | renesac | I also can't index arrays with uints, and I should be able to with uint types smaller than int. |
02:38:18 | * | DAddYE quit (Ping timeout: 240 seconds) |
02:41:31 | * | dom96 joined #nimrod |
02:49:26 | * | DAddYE joined #nimrod |
02:56:30 | * | brson quit (Quit: leaving) |
03:45:35 | * | BitPuffin quit (Ping timeout: 240 seconds) |
03:54:24 | * | xenagi quit (Quit: Leaving) |
04:28:58 | * | BitPuffin joined #nimrod |
04:44:18 | Skrylar | hmm time to make an image api ;f |
04:45:41 | * | darithorn quit (Ping timeout: 252 seconds) |
04:49:40 | Skrylar | Demos / Varriount / BitPuffin: does a pipe model sound good for loading images? |
04:50:27 | Skrylar | what i was thinking is to have different stack-based objects, so you set up a "Load PNG -> x type of buffer -> x-to-y-converter -> byte output" or such |
04:51:04 | Skrylar | with a kind of 'pull' model so you can load multiple images in to the same byte buffer and shoot them off to GL without reallocs |
04:53:40 | fowl | renesac, im looking for another unsigned issue, the one wiht for loops |
04:54:04 | renesac | what issue? |
04:54:11 | renesac | of the loop wrapping around? |
04:54:16 | fowl | yea |
04:54:41 | renesac | I discovered it is a very old bug |
04:55:07 | renesac | https://github.com/Araq/Nimrod/issues/148 |
04:55:25 | renesac | I wonder how well the compiler tracks the max and minimum value of variables |
04:55:35 | BitPuffin | Skrylar: sounds typical |
04:56:38 | renesac | it is just a matter of optimizing where you can prove that you can, and doing the sane thing otherwise |
04:57:22 | fowl | renesac, the `..` iterator adrianv suggests can be constrained to uints so it doesnt affect signed ones |
04:57:47 | renesac | this problem happens with signed integers too |
04:58:04 | renesac | it overflows and continues smaller or equal to its maximum value |
04:58:31 | fowl | oh |
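The wrap-around renesac and fowl are describing shows up when the loop bound is the counter type's maximum value (a sketch of the failure mode from issue #148, with a guard added so the example terminates):

```nimrod
# If `..` is implemented as `while i <= b: yield i; inc i`, then
# after yielding 255'u8 the increment wraps to 0'u8, which is still
# <= 255'u8, so the loop never ends on its own.
var count = 0
for i in 250'u8 .. 255'u8:
  inc count
  if count > 100: break   # guard: without it this sketch may never stop
echo count                # 6 iterations expected; far more if it wraps
```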
04:59:39 | Skrylar | BitPuffin: hm? i haven't seen one of those |
04:59:58 | BitPuffin | Skrylar: well I meant typical for image processing |
05:00:02 | Skrylar | BitPuffin: i thought the typical api was to just have a "LoadPNG(...)" file and then have to do everything else yourself |
05:01:43 | BitPuffin | Skrylar: well not for image loading perhaps |
05:01:58 | BitPuffin | Skrylar: but image processing |
05:02:02 | BitPuffin | like piping things around |
05:02:04 | fowl | is there a benchmarking module? |
05:02:37 | * | nande quit (Read error: Connection reset by peer) |
05:02:40 | renesac | fowl, I was making one |
05:02:56 | renesac | https://github.com/ReneSac/Benchmark |
05:02:56 | fowl | i see it! its on google |
05:03:08 | fowl | renesac, you're famous! |
05:03:23 | renesac | I think it is broken right now |
05:03:27 | renesac | the version on github |
05:03:39 | renesac | maybe I should commit my unfinished changes... |
05:04:06 | Skrylar | not to be a nitpicking derp, but when nimprof works on windows one can use that too :S |
05:04:11 | fowl | it needs a template renesac |
05:04:14 | Skrylar | admittedly, yeah, benchmark frameworks have benefits |
05:04:25 | renesac | I'm making a 'timeit' like template |
05:04:33 | renesac | what template you want? |
05:04:55 | fowl | something simple |
05:05:03 | fowl | timeit(iterations): body |
05:05:14 | fowl | er benchmark() |
05:05:27 | renesac | it has a timeit name |
05:05:33 | fowl | oh you probably cant use that since its the module name |
05:05:57 | renesac | I wanted a 'bench' function or template eventually |
05:06:06 | renesac | that does more than just time the overall speed |
05:06:20 | renesac | maybe registering information about the gc/memory usage, etc |
05:06:32 | renesac | I don't know yet |
05:08:03 | renesac | I need to wrap the higher precision timers to use in the bench module |
05:08:09 | renesac | unlike python, nimrod is fast |
05:09:08 | renesac | small snippets run in less time than cputime() has precision for |
05:09:28 | renesac | or maybe simply print timeit results differently |
05:14:10 | fowl | whats timeit |
05:15:50 | fowl | renesac, laps should be preallocated for efficiency |
05:16:07 | renesac | https://docs.python.org/3.4/library/timeit.html |
05:16:20 | renesac | fowl, there is no api for that |
05:16:38 | fowl | oh |
05:16:54 | fowl | renesac, what do you mean? |
05:17:18 | renesac | how should I preallocate? |
05:17:34 | fowl | newseq[t](1024) |
05:18:00 | renesac | then I can't use ".add()" |
05:18:09 | renesac | it complicates the logic quite a bit |
05:18:18 | fowl | no, you use an integer index |
05:18:33 | renesac | yeah, it complicates the logic |
05:18:46 | fowl | i believe in you |
05:18:48 | renesac | and I would need to store that integer index somewhere |
05:18:53 | renesac | bloating the struct |
05:19:02 | fowl | thats 0 overhead |
05:19:13 | renesac | how so? |
05:19:29 | fowl | well stopwatch instances are rare, and you use them imperatively |
05:19:29 | renesac | I wanted to set the capacity, not the length |
05:19:47 | fowl | they're not on the heap, being allocated spuriously |
05:20:00 | renesac | but the struct would still have to contain that integer index |
05:20:48 | fowl | if you want to use add you can just set the length of the seq to 0 |
05:21:03 | fowl | after creating it |
05:21:31 | fowl | i just like imperative programs :( |
05:21:44 | fowl | store an index, loop while true, go nuts |
05:23:54 | renesac | fowl, assuming that setting the length to zero wouldn't trigger a reallocation |
05:24:05 | renesac | to shrink the seq |
05:25:02 | fowl | renesac, knowing |
05:25:45 | fowl | renesac, thats how you preallocate a seq, and a function similar to newStringOfCap should be added for it |
05:26:44 | fowl | sorry, i thought this was common knowledge >_> |
05:27:28 | renesac | yeah, I already talked with araq about the 'new of cap' thing |
05:27:49 | renesac | I like an optional parameter for 'newSeq' better, but he likes a new proc |
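The preallocation idiom fowl describes, sketched (this assumes, as discussed above, that setLen(0) keeps the backing allocation rather than shrinking it):

```nimrod
# Preallocate, then reset the length so add() can be used without
# reallocations while len stays under the initial capacity.
var laps = newSeq[float](1024)   # allocate room for 1024 items
laps.setLen(0)                    # length 0; capacity assumed retained
for i in 0 .. 9:
  laps.add(float(i))
echo laps.len                     # 10
```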
05:29:44 | Skrylar | okay, one thing that bothers me with seqs |
05:29:50 | Skrylar | the capacity is hidden |
05:30:16 | Skrylar | Rust vectors do have a ".capacity" and ".size" set of operators |
05:30:46 | Skrylar | there is newString(size) and newSeq() iirc |
05:31:42 | fowl | newseq[t](size) |
05:32:10 | renesac | and there is 'setLen' |
05:32:22 | Skrylar | newseq has some silliness issues on occasion |
05:32:37 | Skrylar | i seem to remember bah.newSeq(amt) blowing up, but newSeq(bah, amt) worked |
05:32:56 | renesac | https://github.com/ReneSac/Benchmark <-- updated |
05:33:43 | renesac | I may change from using floats to using another time representation |
05:33:54 | renesac | like a tuple with seconds and nanoseconds |
05:34:03 | renesac | or something like that |
05:36:59 | renesac | I need to put it in babel too |
05:37:20 | fowl | lets start an unofficial repository |
05:38:02 | renesac | ? |
05:38:15 | fowl | for babel packages.json |
05:38:50 | renesac | why? |
05:38:52 | fowl | nvm, im realizing i dont have a reason to, besides to be a rebel |
05:39:00 | renesac | :P |
05:39:49 | renesac | fowl, is 'timeit' the template you had in mind? |
05:41:05 | fowl | # the minimum time is the most consistent. |
05:41:22 | fowl | wouldnt you use the average time? |
05:41:42 | renesac | no, other timings can be affected by cold cache, other processes, etc |
05:41:57 | renesac | timeit also uses the minimum time, and I agree with that |
05:42:26 | renesac | but now I'm giving the laps seq, so you can calculate your own statistics too, even when using timeit |
05:42:48 | renesac | if you want |
05:44:28 | fowl | this looks good |
05:45:56 | renesac | I'm not sure if nimrod needs the 'setup' parameter that python timeit has |
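A sketch of the timeit-style template being discussed, keeping the minimum over repetitions since that is least affected by cold caches and other processes (times.cpuTime is stdlib; everything else, including the template name, is illustrative; the `stmt` parameter is 2014-era syntax, modern Nim would use `untyped`):

```nimrod
import times

# Run `body` `reps` times and report the minimum elapsed CPU time.
template timeit(reps: int, body: stmt) =
  var best = 1.0e30
  for r in 1 .. reps:
    let start = cpuTime()
    body
    let lap = cpuTime() - start
    if lap < best: best = lap
  echo "min: ", best, " s"

var s = 0
timeit(5):
  for i in 0 .. 100_000: inc s
```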
05:56:43 | fowl | renesac, i think this unsigned loop issue needs a solution more than it needs an efficient solution, but i will test all the options in that issue with your benchmark thing |
06:05:37 | * | njoejoe quit (Quit: Page closed) |
06:06:18 | * | Jesin quit (Remote host closed the connection) |
06:09:18 | * | gsingh93 quit (Quit: Connection closed for inactivity) |
06:11:17 | renesac | fowl, I think it is not precise enough for this... but you can try |
06:11:36 | renesac | and as I said, the big problem is that it isn't unsigned only, it affects all nimrod for-loops |
06:12:38 | renesac | that is why efficiency is so important |
06:12:42 | renesac | good night |
06:14:08 | renesac | actually, precision is not so important if you just give the timing for each repetition instead of trying to give the time per loop as `$` for timeit currently does |
06:14:51 | * | DAddYE quit (Remote host closed the connection) |
06:15:47 | Demos | for benchmarking you want a precise timer |
06:33:33 | Skrylar | precise timers are hard to get though |
06:33:46 | Skrylar | a lot of them are based on rdtsc which doesn't work in SMP systems |
06:33:52 | * | DAddYE joined #nimrod |
06:43:13 | Demos | does it even matter though if it is out of synch on different cores |
06:43:34 | Demos | you are measuring the time some code takes to run, so you just measure each bit of code and sum the times at the end |
06:43:42 | Skrylar | no, its not that its out of sync on cores |
06:43:56 | Skrylar | it was something about how if CPU 1 activates, and CPU 2 does something, they both 'share' the rdtsc counter |
06:44:11 | Demos | oh, OK never mind |
06:44:11 | Skrylar | which means if a background process gets some cpu time during your test, the timer is thrown way off |
06:44:34 | Skrylar | disabling all but one core and rebooting is the only way to get a usable result out of rdtsc nowadays |
07:01:07 | * | oxful joined #nimrod |
07:35:58 | Araq | OrionPK: the bug is not that easy to fix, but I'll have a workaround by tonight |
07:37:56 | Araq | alternatively you can write your procs so that the codegen doesn't need more than 40 registers :P |
07:50:41 | * | wan quit (Quit: leaving) |
08:05:54 | * | DAddYE quit (Remote host closed the connection) |
08:06:21 | * | DAddYE joined #nimrod |
08:10:42 | * | DAddYE quit (Ping timeout: 240 seconds) |
08:19:18 | * | runvnc quit (Ping timeout: 240 seconds) |
08:40:32 | * | Demos quit (Read error: Connection reset by peer) |
09:21:08 | * | BitPuffin quit (Quit: WeeChat 0.4.3) |
09:22:40 | * | runvnc joined #nimrod |
09:55:53 | * | BitPuffin joined #nimrod |
09:58:55 | Skrylar | later i get to bind the webp headers. wheee. |
11:12:48 | Araq | Skrylar: who has more babel bricks? you or fowl ? ;-) |
11:30:57 | * | faassen joined #nimrod |
11:39:33 | Skrylar | probably fowl |
11:39:40 | * | springbok_ joined #nimrod |
11:39:50 | Skrylar | none of my stuff is actually in babel, and i tend to clump |
11:40:56 | Skrylar | skUnicode is stuff i should get around to merging with stdlib, skTypeface grabs glyphs from gdi/freetype, Skylights is "everything non-GUI a program needs" and Syl is "the GUI" |
11:41:41 | * | springbok quit (Ping timeout: 264 seconds) |
11:48:40 | Araq | please put your stuff on babel |
12:01:55 | * | BitPuffin quit (Quit: WeeChat 0.4.3) |
12:02:15 | * | BitPuffin joined #nimrod |
12:27:08 | * | springbok_ is now known as springbok |
12:27:32 | * | springbok quit (Changing host) |
12:27:32 | * | springbok joined #nimrod |
12:32:55 | Araq | hi springbok welcome |
12:33:09 | springbok | morning |
12:54:58 | EXetoC | "await sock.connect("127.1.0.1", TPort(27017))" the proc returns after that even though more statements follow |
12:57:13 | EXetoC | https://gist.github.com/EXetoC/d32a327c7af0a06d2745 |
13:02:18 | * | BitPuffin quit (Ping timeout: 240 seconds) |
13:07:47 | Araq | EXetoC: afaik exception handling doesn't really work with 'async' so maybe it raises something that gets lost? |
13:15:30 | * | darkf quit (Quit: adjacent) |
13:20:34 | * | nande joined #nimrod |
13:35:36 | EXetoC | right |
13:36:54 | OrionPK | Araq oh yeah? any details? |
13:37:14 | EXetoC | I don't know how it can be considered ready, at least in general. for example, I'm trying to get MongoDB not to disconnect and that requires some trial and error |
13:37:29 | OrionPK | 40 registers?! |
13:38:14 | Araq | EXetoC: it's not considered to be ready |
13:38:25 | Araq | just recently we got it crash free ... |
13:40:24 | Araq | OrionPK: that's the arbitrary limit in the VM codegen |
13:40:28 | EXetoC | I guess I misinterpreted "ready for prime time". I'll keep trying to do something practical with it |
13:41:06 | Araq | the VM has 255 registers but after 40 the register allocator decides to be more aggressive with re-using registers |
13:41:27 | Araq | hence the 'if false' affecting codegen |
13:41:28 | EXetoC | and the nnkDiscardStmt case has an out-of-bounds bug which I'll report soon |
13:41:29 | OrionPK | araq interesting... but the code that I put in the ticket isnt that sophisticated |
13:41:47 | OrionPK | wouldnt this potentially affect a lot of macros |
13:42:02 | Araq | I don't kow the details yet, OrionPK |
13:42:06 | OrionPK | mmk |
13:42:25 | Araq | but a lot of macros work with the new engine ... |
13:42:33 | OrionPK | ya |
13:42:38 | OrionPK | obviously :) |
13:43:25 | Araq | EXetoC: I'm sorry you got a wrong impression. I thought we had updated the news message |
13:44:04 | OrionPK | araq I am using parseutils pretty heavily |
13:44:24 | * | [1]Endy joined #nimrod |
13:46:23 | EXetoC | Araq: it's fine. I'm doing things of little importance as always |
13:46:41 | EXetoC | not using 'await' seems to be fairly painless |
13:47:10 | Araq | OrionPK: afaict parseutils is not the problem |
13:47:16 | EXetoC | that might change at some point, but all I need is a convenience proc for now |
13:48:42 | OrionPK | well araq, happy to keep you on your toes ;P |
13:49:23 | Araq | I just wish we had no bugs |
13:49:24 | OrionPK | i'd like to put the templates lib out on babel once my tests are passing again |
13:49:33 | Araq | then I could implement new features all the time ... |
13:49:36 | OrionPK | :D |
13:50:10 | Araq | and we haven't run out of ideas yet. at all. |
14:05:31 | * | nande quit (Read error: Connection reset by peer) |
14:10:46 | * | nande joined #nimrod |
14:21:11 | * | isenmann quit (Quit: Leaving.) |
14:23:22 | * | BitPuffin joined #nimrod |
14:27:31 | renesac | if those 255 registers are exceeded, do the VM push things to a stack? |
14:37:52 | * | Jesin joined #nimrod |
14:43:57 | dom96 | hello |
14:44:08 | dom96 | EXetoC: This is no bug. You forgot to call runForever() |
14:46:50 | EXetoC | ok |
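The fix being pointed out, sketched: an async proc only runs past its first `await` while the dispatcher is polling, so the program has to enter the event loop (the proc body is illustrative; asyncCheck and runForever are from asyncdispatch):

```nimrod
import asyncdispatch

proc main() {.async.} =
  # await sock.connect(...), send/recv, etc. would go here
  discard

asyncCheck main()   # start the async proc without discarding errors
runForever()        # keep polling so statements after `await` execute
```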
14:47:31 | * | menscrem joined #nimrod |
14:47:54 | menscrem | hi all |
14:47:57 | dom96 | Perhaps I shouldn't have said it's ready for prime time. I do want you to use it *now* though. |
14:48:10 | dom96 | hi menscrem |
14:51:14 | * | Skrylar quit (Ping timeout: 276 seconds) |
14:53:45 | * | darithorn joined #nimrod |
15:22:56 | dom96 | I wonder what the best way to parallelize the async stuff is. |
15:32:20 | OrionPK | dont await them |
15:32:39 | OrionPK | do async procs return a task handle of some sort? |
15:33:14 | dom96 | they return a future |
15:33:45 | dom96 | by parallelize I'm talking about distributing them over multiple threads so that all cores can be used. |
15:34:18 | OrionPK | ah |
15:34:32 | OrionPK | like a threadpool like concept? |
15:35:57 | dom96 | yes |
15:37:08 | OrionPK | maybe await automatically puts tasks into the threadpool queue, where the queue defaults to the size of logical processors |
15:38:45 | dom96 | From what I have read it seems that IOCP has been designed to be used with multiple threads. |
15:39:24 | dom96 | It distributes the events evenly among threads. |
15:40:44 | * | bjz quit (Ping timeout: 265 seconds) |
16:00:38 | * | untitaker quit (Ping timeout: 240 seconds) |
16:01:41 | * | BitPuffin quit (Ping timeout: 250 seconds) |
16:06:07 | * | untitaker joined #nimrod |
16:07:48 | * | BitPuffin joined #nimrod |
16:10:49 | * | Jehan_ joined #nimrod |
16:11:59 | * | DAddYE joined #nimrod |
16:28:47 | * | DAddYE quit (Remote host closed the connection) |
16:32:23 | * | DAddYE joined #nimrod |
16:49:32 | * | BitPuffin quit (Ping timeout: 276 seconds) |
16:51:52 | * | Matthias247 joined #nimrod |
17:02:08 | * | BitPuffin joined #nimrod |
17:02:36 | * | superfunc joined #nimrod |
17:02:57 | superfunc | Does the standard library use GC? |
17:03:34 | dom96 | yes |
17:04:57 | superfunc | thanks dom |
17:06:40 | * | brson joined #nimrod |
17:11:14 | superfunc | Do you think it hurts the language's viability as a systems language? |
17:14:39 | EXetoC | it shouldn't. the language allows it to be disabled at both run-time and compile-time. the standard library might end up being allocation-agnostic some time in the future |
17:14:51 | EXetoC | but some people have a narrow-minded view of garbage collection |
17:14:57 | superfunc | I agree |
17:15:09 | superfunc | I think the "it is GC'd so lets ignore it for sys prog" is ridiculous. |
17:17:05 | EXetoC | yup. we have a pretty performant GC already AFAIK |
17:17:18 | EXetoC | several in fact, one of which is a realtime GC |
17:18:55 | * | DAddYE_ joined #nimrod |
17:19:04 | superfunc | people complain about D for a similar reason IIRC |
17:19:26 | superfunc | The only real way to combat that, is to write high performance programs in it, which I hope to complete this summer |
17:19:56 | EXetoC | D seems to have a pretty bad GC though, but they have made the standard library less dependent on it over the years |
17:22:45 | * | DAddYE quit (Ping timeout: 252 seconds) |
17:36:25 | * | q66 joined #nimrod |
17:36:25 | * | q66 quit (Changing host) |
17:36:25 | * | q66 joined #nimrod |
17:39:30 | OrionPK | anyone know if the nimrod-code/sdl2 lib supports joystick/gamepads? |
17:39:52 | * | Mat3 joined #nimrod |
17:39:58 | Mat3 | good afternoon |
17:40:47 | OrionPK | ah nm, looks like you pass it in w/ init |
17:41:54 | superfunc | nice, I just started playing with SDL2 port recently |
17:42:08 | superfunc | will probably be writing some useful high level wrappers as I work on my project |
17:57:25 | Araq | superfunc: D's design is very hostile to any efficient GC implementation though. Nimrod is in a completely different league here. |
17:58:51 | EXetoC | dom96: so it works so far, though I'm only doing basic things like I said. I wonder how debuggable it is when exceptions are raised though |
17:59:16 | EXetoC | I found something practical to use it for. I'm writing an interface for the mongodb wire protocol rather than using an existing API |
17:59:37 | dom96 | I think stack traces could use some improvement. |
18:00:17 | superfunc | Araq: Thanks. I assumed this was the case. |
18:00:36 | dom96 | And there is still that restriction that you cannot catch exceptions inside the async procs. |
18:01:24 | EXetoC | we don't have anything for binary serialization I think, so maybe I can work on that too |
18:01:39 | EXetoC | a pluggable marshal.nim has come up before. maybe I should look into that |
18:03:03 | dom96 | Araq: How should async be parallelized? |
18:04:22 | EXetoC | and then somehow add support for endianness |
18:07:12 | * | njoejoe joined #nimrod |
18:07:44 | Araq | dom96: dunno, you only need a simple load balancer |
18:08:04 | dom96 | Araq: I need to share the dispatcher between threads on Windows. |
18:08:19 | dom96 | shared refs would be very helpful |
18:08:26 | Araq | I know :P |
18:08:35 | Araq | but we won't get them anytime soon |
18:09:17 | dom96 | how should I allocate the ref then? |
18:09:32 | * | Demos joined #nimrod |
18:09:56 | Araq | depends, I need to see the code |
18:16:03 | njoejoe | dom96: are you working on making jester async? I will gladly help testing it. btw, I ran into a nice little article on control flow and async with node.js: http://book.mixu.net/node/ch7.html |
18:17:17 | dom96 | njoejoe: Not currently. Thinking of writing an article on my blog about how the new async stuff works. |
18:18:24 | njoejoe | i'll be first in line to read it :-) |
18:21:34 | EXetoC | dom96: when does it become more complicated to poll the state of the future in a loop? possibly with just a simple generic proc shortcut |
18:22:17 | * | springbok quit (Ping timeout: 252 seconds) |
18:23:44 | dom96 | EXetoC: Futures have a callback field, it would be easier to use that and likely more efficient too. |
18:24:05 | dom96 | But then you have "callback hell". |
18:26:19 | dom96 | What you have in mind (if I understand it correctly) would go around the callback hell, but you would block the current thread waiting for the future to finish. |
18:26:23 | Araq | dom96: how do you know it needs to be shared? |
18:27:42 | EXetoC | dom96: I was only using the other fields before |
18:28:20 | EXetoC | but then you need to manage arbitrary instances of PFuture |
18:28:32 | dom96 | Araq: http://www.drdobbs.com/cpp/multithreaded-asynchronous-io-io-comple/201202921?pgno=1 |
18:29:17 | EXetoC | *variations |
18:52:02 | renesac | Araq, you basically need a faster tree thing for the interior pointer search, right? |
18:52:12 | renesac | have you looked at b+trees in memory? |
18:53:16 | Araq | as I said, no I haven't |
18:53:52 | renesac | I've been wanting to write one, but haven't got around doing that |
18:54:06 | renesac | mostly for fun though... |
18:55:06 | Araq | I think the fastest is a sorted array |
18:55:13 | Araq | *fastest way |
18:56:00 | renesac | and as for the GC performance, if you get unique/borrowed pointers working, the compiler can optimize a lot of code to elide the GC |
18:58:04 | Araq | well escape analysis is not that hard |
18:58:41 | * | gsingh93 joined #nimrod |
18:59:01 | Jehan_ | Unique/borrowed pointers do add a lot of complexity and hurt reusability, though. |
18:59:07 | renesac | AND it becomes a big selling point against rust (but they will probably come saying that nimrod analysis isn't as advanced as rust's, or something) |
19:00:24 | Jehan_ | It depends on who you ask. The cost of obsessively avoiding GC is a price that I am generally not willing to pay, for example. |
19:01:04 | renesac | Jehan_, the idea is that it would enable transparent optimization by nimrod compiler |
19:01:43 | Demos | rust just looks /really/ complex, and once you start doing things where memory management is actually hard the unique/borrowed pointers become really difficult to use |
19:01:48 | Matthias247 | yes, the compiler could detect for temporaries that a unique pointer is sufficient. Or even a stack allocation |
19:01:55 | Jehan_ | I know. The problem is that it's not free. |
19:02:38 | renesac | it would be like using the GC (deferred reference counting), but the compiler can totally ignore counting the references behind your back if it can prove correctness of borrowing/transferring ownership |
19:02:39 | Jehan_ | The major benefit of having automatic memory management is that you don't have to clutter APIs with assumptions about memory management, ownership, etc. |
19:03:08 | Demos | do you even always want unique pointers for temporaries? with a GC you could better batch the free operation... Not sure though |
19:03:12 | renesac | if it can't, then you have today's performance |
19:03:19 | Jehan_ | Rust pretty much destroys this. Mind you, they have good reasons for the tradeoff they're making, but it's a tradeoff that I am not interested in. |
19:04:55 | renesac | I'm proposing a new tradeoff: where you are not sure if you are using the gc or not |
19:05:11 | renesac | but you work as if you were using it |
19:07:54 | Jehan_ | I honestly am not clear why people are so concerned about avoiding garbage collection. The application domains where it matters (and you still need automated memory management) are few and far between. |
19:08:04 | Jehan_ | s/clear/sure/ |
19:08:40 | renesac | people are usually traumatized by the random collecting pauses |
19:08:45 | renesac | and higher memory usage |
19:08:58 | * | Mat3 quit (Ping timeout: 240 seconds) |
19:09:27 | Jehan_ | That's because people haven't seen a lot of decent GC implementations. |
19:09:28 | renesac | both of which nimrod GC seems to reasonably avoid |
19:09:38 | Jehan_ | Nobody has ever worried about pauses in OCaml, for example. |
19:10:15 | Jehan_ | Which uses a generational minor collector with a bump allocator and an incremental major collector. |
19:10:38 | renesac | do we have some benchmark that taxes the GC implemented in nimrod? |
19:11:06 | Jehan_ | The reason why Java etc. are having problems is that they have to work with multiple threads on a huge shared heap, which is a much harder problem than single threaded GC. |
19:11:23 | Demos | The other thing is that the big GC'd languages allocate everything on the heap just because they can |
19:11:57 | Jehan_ | Well, Java also doesn't have value types, which is a huge problem. |
19:12:12 | Demos | true, but even C# programs tend to use the heap a whole lot |
19:12:18 | Jehan_ | 1000x1000 matrix of complex numbers in Java => GC nightmare. |
19:12:33 | Jehan_ | And honestly, for most applications it doesn't matter. |
19:12:44 | Demos | and then half the point of dynlangs is to have everything be the same size, so you need a lot of heap allocation there as well |
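Jehan_'s matrix example can be approximated in Python (an illustrative sketch only; the variable names are made up and the byte counts are CPython-specific): a boxed container is one heap object per element for the collector to trace, while flat value storage is a single contiguous buffer.

```python
import sys
from array import array

n = 100
# Boxed: n*n separate PyComplex heap objects the collector must trace,
# analogous to a Java matrix of Complex objects.
boxed = [complex(i, i) for i in range(n * n)]
# Flat value storage: one contiguous buffer of doubles, no inner objects,
# analogous to a value-type matrix.
flat = array('d', (float(i) for i in range(n * n)))

# Rough memory comparison: spine plus every element object vs. one buffer.
boxed_bytes = sys.getsizeof(boxed) + sum(sys.getsizeof(c) for c in boxed)
flat_bytes = sys.getsizeof(flat)
```

On CPython the boxed version costs several times the memory of the flat one, and every one of those 10,000 objects is extra work for the GC.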
19:12:58 | Jehan_ | I mean, Github can use Ruby on Rails without performance problems. |
19:13:13 | Araq | citation needed :P |
19:13:28 | Araq | I bet they use many more servers than they need to |
19:14:14 | Jehan_ | Oh, probably. |
19:14:45 | Jehan_ | But the thing is, they're probably sacrificing an order of magnitude or two of performance and still manage. |
19:15:14 | Demos | not having a GC is good for when you need to be a foundational library with a very clean ABI. But I don't think rust is really aiming for that either, since lending across languages is gunna be a bad time
19:15:23 | Demos | s/clean/clear |
19:15:47 | Araq | the bigger problem is that Ruby's inefficiency doesn't buy you anything, IMHO |
19:16:33 | Araq | IME dynamic typing simply is not productive |
19:16:40 | Jehan_ | Incidentally, Erlang has a pretty neat trick here. Start a new process (which is actually a light-weight thread) for a temporary computation, never garbage collect, then throw the entire heap of the process away. |
19:17:54 | Jehan_ | You can even use a special allocator for that situation that just increments the end of storage pointer. It's sort of like a poor man's region system.
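Jehan_'s bump-allocator remark can be sketched in a few lines (illustrative Python, not Erlang; the `Arena` class and sizes are invented for the example): allocation only advances an offset, there is no per-object free, and discarding the whole arena reclaims everything at once, like throwing away an Erlang process heap after a temporary computation.

```python
class Arena:
    """A toy region/bump allocator over a fixed byte buffer."""

    def __init__(self, size: int) -> None:
        self.buf = bytearray(size)
        self.top = 0          # end-of-storage pointer

    def alloc(self, n: int) -> memoryview:
        if self.top + n > len(self.buf):
            raise MemoryError("arena exhausted")
        off = self.top
        self.top += n         # the only bookkeeping: bump the pointer
        return memoryview(self.buf)[off:off + n]

    def reset(self) -> None:
        self.top = 0          # "collect" the entire region in O(1)

arena = Arena(1024)
a = arena.alloc(16)
b = arena.alloc(32)
arena.reset()                 # all temporaries discarded at once
```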
19:17:58 | Demos | I think dynamic types are appealing since it means all your functions are generic and you can have heterogeneous lists and stuff |
19:18:20 | Demos | well "generic" |
19:18:29 | renesac | it is also much easier to get started |
19:19:05 | Demos | there is a case to be made for dynamic types with run-time type checking annotations |
19:19:14 | Demos | kinda like nimrod's typeclasses but at runtime |
19:19:48 | Demos | I think pyret does that |
19:20:19 | Jehan_ | I think the reason why dynamic languages are popular is that (1) they're easier to learn for non-computer scientists and because (2) historically, type systems of mainstream languages have been very inexpressive. |
19:20:46 | Jehan_ | When until 2004 or so you had to use typecasts in Java all over the place, people reasonably wondered why not just use Python or Ruby anyway. |
19:20:53 | Araq | and (3) they are dead simple to implement |
19:21:09 | renesac | yeah |
19:21:11 | Araq | people tend to ignore this fact ;-) |
19:21:30 | Jehan_ | Oh, yeah. But generally at the price of performance. Writing an efficient compiler for a dynamically typed language is not easy. |
19:21:59 | Jehan_ | People like Mike Pall or Gilad Bracha do not grow on trees, alas. |
19:22:10 | renesac | Jehan_, that problem comes only after many years |
19:22:28 | Jehan_ | Which problem? |
19:22:32 | renesac | when the language is actually starting to get used in big projects, or scientific computation like python
19:22:46 | renesac | worrying about the interpreter performance
19:23:23 | renesac | one doesn't make a new dynamic language thinking about performance (except if you are a Julia dev)
19:23:24 | Jehan_ | I had that problem back in the 1990s using Python. :) |
19:24:03 | renesac | you just want to make a new nice and simple language |
19:25:18 | Jehan_ | I'm not sure about that. Back when many of the popular interpreted languages were invented, their performance was actually a fairly big concern. |
19:25:47 | Jehan_ | I know that the Lua people spent quite a bit of effort on optimizing their VM. |
19:27:21 | Jehan_ | And I know that Python performance was a concern in the 1990s, too (because I talked to Guido van Rossum about it). |
19:27:27 | renesac | From the start, Lua was designed to be simple, small, |
19:27:27 | renesac | portable, fast, and easily embedded into applications |
19:27:34 | Araq | one good argument against dynamic languages is how well JITs work on them :P |
19:27:58 | Araq | people can't cope with dynamic languages, they write mostly static programs anyway |
19:28:34 | Araq | otherwise JITs wouldn't be as effective as they are |
19:29:06 | renesac | if writing generic programs in static languages was so simple... |
19:29:09 | Jehan_ | That argument doesn't really apply to tracing? |
19:29:11 | renesac | of course, it is changing now |
19:29:40 | njoejoe | as a ruby guy, i have to say that "easy to use" is #1. But concurrency is a big problem in ruby which is why I'm looking at nimrod. I have high hopes that dom96 can make an easy to use async jester. |
19:29:40 | Jehan_ | renesac: What I said above about inexpressive type systems in mainstream languages. |
19:29:48 | renesac | but any typesystem will allow only a subset of correct programs |
19:29:55 | renesac | yeah |
19:29:57 | Jehan_ | At least Go is now getting pushback for not having parametric polymorphism. |
19:30:58 | Araq | Jehan_: I'm not even sure that argument applies at all :-) |
19:31:04 | Araq | but it's an interesting thought |
19:31:34 | Jehan_ | An example of something that's still difficult to do in a typesafe fashion: The stack that YACC uses. |
19:32:47 | Matthias247 | I think the c++ way of using templates is a mess. However I think not having generics at all like Go is also not helpful |
19:32:56 | Araq | renesac: the "subset of valid programs" is a feature, not a problem |
19:33:39 | Jehan_ | C++ templates aren't a mess per se, it's just how they need to interact with the rest of a fairly complex language that makes them hard. |
19:33:55 | renesac | Araq, still, it may be a burden to get things running fast (but yes, you have much more help from the compiler) |
19:34:05 | Demos | C++'s templates are actually pretty sane, they can interact badly with other stuff and the syntax is utterly painful |
19:34:30 | renesac | C++ template programming was discovered, not designed |
19:34:31 | Matthias247 | the understandability of the code is ... |
19:34:32 | renesac | :P |
19:34:36 | Matthias247 | and the error messages :) |
19:34:51 | Araq | I believe in the principle of least power. Btw that's also why using recursion for iteration is wrong, IMHO. |
19:34:51 | Jehan_ | renesac: I'm also not concerned that once in a blue moon I may have to subvert the typesystem, as long as I don't have to do it all the time. |
19:34:53 | Matthias247 | renesac: yes, that's it |
19:35:18 | Matthias247 | if I see something like std::enable_if< I stop reading
19:35:18 | Jehan_ | The covariance problems of Eiffel's generic types never bothered me, because you rarely encountered them in practice. |
19:35:51 | Araq | ha, to me they suggest that inheritance is deeply flawed no matter how you look at it. |
19:37:05 | Jehan_ | Doesn't have anything to do with inheritance per se, it's just that the combination of parametric polymorphism and subtyping polymorphism leads to weird problems. |
19:37:18 | * | Matthias247 still hopes for concepts coming for the rescue :) |
19:37:37 | Matthias247 | Nimrod and C# already do the right things there |
19:37:43 | Jehan_ | For example, F-bounded polymorphism is a solution, it just weirds out non-type-theorists. :0 |
19:37:55 | Araq | if I have A <: B and then neither array[A] <: array[B] nor array[B] <: array[A], that means it's a worthless relation. And this is the case for value based datatypes. |
19:38:39 | renesac | http://okmij.org/ftp/Computation/Subtyping/ <-- for more brokenness regarding inheritance |
19:38:46 | Araq | And no amount of type theory can patch over that. |
19:39:25 | Demos | it seems nimrod's typeclasses can get you 90% of the way there |
19:40:01 | Demos | not actually sure how F-bounded polymorphism is different... But I have not used a language with it in a long time
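Araq's point that neither array[A] <: array[B] nor the reverse is sound can be made concrete (a Python sketch with invented class names; Python enforces no variance at runtime, which is exactly why the hole becomes visible):

```python
# If list[Cat] were a subtype of list[Animal] (covariance), any function
# expecting list[Animal] could insert a Dog into a list of Cats.

class Animal: ...

class Cat(Animal):
    def meow(self) -> str:
        return "meow"

class Dog(Animal): ...

def add_dog(animals: list) -> None:   # believes it holds a list of Animals
    animals.append(Dog())

cats: list = [Cat()]
add_dog(cats)        # would type-check under covariance...
try:
    cats[1].meow()   # ...and then a Dog has snuck in among the Cats
except AttributeError as e:
    print("unsound:", e)
```

This is why statically typed languages either forbid such conversions for mutable containers (invariance) or restrict them to read-only/write-only views.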
19:40:32 | * | brson quit (Ping timeout: 246 seconds) |
19:41:36 | Araq | and indeed subtyping mostly works for references/pointers and so most languages don't have value based objects |
19:43:22 | Demos | I think parametric polymorphism is often simpler to understand than subtype in any case, probably easier to use correctly as well
19:43:40 | Jehan_ | Araq: Not sure why it's necessary for A <:B to have F[A] <: F[B] (or the inverse). |
19:44:07 | Jehan_ | Not every function is a homomorphism. |
19:45:06 | Araq | it's an academic argument I guess, but <: doesn't compose well. |
19:45:57 | Jehan_ | Neither do a lot of important concurrency mechanisms, e.g. transactions. |
19:46:11 | Jehan_ | Composition is desirable, but not always attainable. |
19:46:39 | Jehan_ | Composability* |
19:48:31 | Araq | well it doesn't compose well and is quite complex at the same time without giving you much. Concurrency mechanisms on the other hand are at least immediately useful. |
19:48:55 | Araq | but ymmv of course |
19:49:13 | Jehan_ | Yeah. I think this is something where one can easily agree to disagree. :) |
19:57:48 | * | brson joined #nimrod |
19:57:48 | Jehan_ | By the way, if anybody is interested: http://codegolf.stackexchange.com/questions/26371/how-slow-is-python-really-part-ii |
19:58:43 | Jehan_ | I have a roughly 80x speedup in Nimrod without much effort (and without tricks or parallelizing code). |
20:05:42 | * | Jesin quit (Quit: Leaving) |
20:16:53 | * | gsingh93 quit (Ping timeout: 276 seconds) |
20:18:00 | * | menscrem quit (Quit: Page closed) |
20:21:09 | * | gsingh93 joined #nimrod |
20:22:55 | Matthias247 | C# 0.135s <-- congrats to all the guys who are only measuring the startup times of VMs ;)
20:25:12 | * | shodan45 joined #nimrod |
20:33:33 | EXetoC | -.- |
20:33:44 | EXetoC | the most important metric of course |
20:34:43 | njoejoe | Jehan_: are you going to post your solution on there? |
20:35:06 | Jehan_ | Nah, it's not good enough to be competitive without further iterations. |
20:35:51 | njoejoe | i think 80x is pretty good! :-) |
20:41:20 | dom96 | Not in comparison to C++'s 508x :O |
20:42:55 | Jehan_ | dom96: That has nothing to do with C++ and everything to do with the implementation using bit strings rather than vectors.
20:42:59 | Jehan_ | https://gist.github.com/rbehrends/113d2759d36a1e7bb6c8 |
20:43:15 | Jehan_ | If I went the bitstring route, I could probably squeeze out similar performance. |
20:43:43 | Jehan_ | In the end, Nimrod compiles to C in a way that is essentially like native C for anything that doesn't require memory management. |
21:04:06 | dom96 | Jehan_: Oh yeah, I know. I'm not sure others would see it that way though. |
21:04:49 | Jehan_ | dom96: Probably. Most of these benchmark games aren't apples-to-apples comparisons, anyway.
21:06:48 | dom96 | Jehan_: Although maybe it would be worth submitting it anyway, even if to advertise Nimrod a little bit. |
21:08:08 | Jehan_ | Not sure it's worth it. Too limited an audience. I think what Nimrod really needs for advertising is the infamous killer app/lib. |
21:08:59 | Jehan_ | The only nice part about my code is that I don't actually shy away from using a solid RNG rather than one with dubious properties for speed. |
21:09:18 | Jehan_ | I still get decent speed because I milk most of the bits out of each invocation. |
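Jehan_'s "milking the bits" remark describes a standard trick (a generic sketch follows, not his actual solution): take one draw from a solid RNG and slice it into several small values by masking and shifting, so the per-value cost of the generator is amortized.

```python
import random

def bytes_from_one_draw(rng: random.Random) -> list:
    """Slice a single 64-bit RNG invocation into eight 8-bit values."""
    word = rng.getrandbits(64)   # one call to the underlying generator
    out = []
    for _ in range(8):           # peel off eight bytes
        out.append(word & 0xFF)
        word >>= 8
    return out

rng = random.Random(12345)
vals = bytes_from_one_draw(rng)  # eight values for the price of one draw
```

With a high-quality generator (Mersenne Twister here; Jehan_'s choice of RNG is not specified in the log), every bit of the draw is usable, so nothing is wasted.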
21:10:13 | njoejoe | Jehan_: yes, and I think that killer app/lib is jester with async so easy the programmer doesn't have to think about it |
21:11:02 | Jehan_ | Does it beat the alternatives? Genuine question, I don't have the first clue about web apps. |
21:13:54 | Matthias247 | depends. There are fast web frameworks out there. But most people like the really slow ones (php, ruby, python based) ;) |
21:14:18 | njoejoe | jester as it is now is very easy to use and very fast compared to alternatives, but it needs async (and maybe websocket dsl). |
21:14:37 | Jehan_ | ASP.NET? |
21:15:14 | Jehan_ | Again, don't know it, but would think it should be fast and have async support. |
21:15:42 | dom96 | Jehan_: What was Python's killer app/lib? |
21:15:55 | Matthias247 | don't know about the current state of ASP but .NET in general is capable of quite good performance
21:16:06 | Jehan_ | dom96: Waiting a couple of decades for adoption? :) |
21:16:21 | dom96 | heh |
21:16:22 | Jehan_ | Python's selling point was always that it was simple, clean, easy to teach. |
21:16:37 | Jehan_ | Once computers became powerful enough for interpreted languages, it became attractive. |
21:17:52 | * | Jesin joined #nimrod |
21:18:00 | njoejoe | I have no experience with ASP.NET but I highly doubt it is as nice to use and succinct as jester |
21:18:39 | dom96 | The biggest problem with ASP.NET IMO is that it's not cross platform. |
21:18:44 | dom96 | Mono doesn't really count. |
21:18:56 | Jehan_ | Why doesn't Mono count? |
21:20:29 | dom96 | Because it's always behind the Microsoft implementation and it's also far slower as far as I know.
21:21:07 | Jehan_ | It is still plenty fast. Source: I use Mono on and off. |
21:21:34 | Jehan_ | My main problem with Mono is that IF I run into a bug in the JIT compiler, it's extremely hard to deal with. |
21:21:48 | * | superfunc quit (Ping timeout: 240 seconds) |
21:21:52 | Jehan_ | That's incidentally why I'm using Nimrod, even though it's rough around the edges still. |
21:22:11 | Jehan_ | The implementation is simple enough that I can easily fix bugs myself if needed. |
21:22:20 | Demos | I thought python's killer feature was really good tools for doing library bindings |
21:22:30 | Matthias247 | I also tried one of my C# apps on linux with mono and it really ran straightforwardly without problems. Was really surprised
21:22:36 | Jehan_ | When whining to Araq about them doesn't help. :) |
21:22:49 | Matthias247 | and a colleague is even using mono for doing cross-platform mobile apps with xamarin
21:23:22 | Jehan_ | And yeah, Mono is slower than Microsoft's implementation, but unless you're into high performance stuff, not really enough for it to matter. |
21:23:28 | dom96 | Perhaps I should reevaluate my stance on Mono. |
21:23:40 | Jehan_ | And you do get some pretty comprehensive batteries as part of the deal. |
21:24:43 | Matthias247 | that's the nicest thing. there's probably no better standard library than the .net one
21:24:46 | Demos | https://onedrive.live.com/redir?resid=BE38BDD0FF029113!20910&authkey=!AIpV1BaS3wgpM_Y&v=3&ithint=photo%2c.PNG |
21:24:54 | Demos | I did not even have to do anything to get that working! |
21:26:28 | Jehan_ | The big problem with Mono, as I said, is if you run into a bug. |
21:26:54 | Jehan_ | Not that it happens often, but if you do, you're kinda dependent on Xamarin to get it fixed. |
21:27:17 | dom96 | Demos: Visual Studio supports C debugging? :O |
21:27:33 | Demos | note that that was a nimrod source file! |
21:27:42 | Demos | with the correct line numbers |
21:27:47 | Jehan_ | The same goes in theory for the JVM, but Java has such a huge userbase that this problem is exceedingly rare. |
21:28:03 | Matthias247 | Jehan_: I even found one in Microsoft's implementation of the .net library :) Which didn't get fixed :(
21:28:35 | njoejoe | Demos: what's that? breakpoints and watches of nimrod code? holy cow, that's amazing. do i have to run windows to get that? |
21:28:44 | Jehan_ | But in general I'm very cautious of languages that depend on JIT compilers, even if they are attractive. |
21:29:07 | Demos | well not really, but gdb is not that fun to use |
21:29:09 | Jehan_ | Trying to debug the code generation of a JIT compiler unless you're extremely familiar with it can be … painful. |
21:29:21 | Demos | just compile with --debuginfo --linedir:on and some stuff may work |
21:29:44 | Demos | it does not break on exceptions |
21:29:47 | Demos | which I think I can gix |
21:29:49 | Demos | *fix |
21:30:00 | Demos | where do we put compiler intrinsics in the standard lib? |
21:30:51 | EXetoC | oh, GDB debug information |
21:30:51 | OrionPK | Demos whattttt is that picture |
21:31:19 | Demos | that is me after attaching visual studio's debugger to a nimrod program |
21:31:33 | OrionPK | :D |
21:31:34 | Demos | being surprised that stuff works as well as it does
21:31:36 | OrionPK | awesome |
21:31:42 | OrionPK | where's the syntax highlighting? :P |
21:31:52 | Demos | not installed..... |
21:32:17 | * | [1]Endy quit (Ping timeout: 276 seconds) |
21:32:39 | Demos | there is a syntax highlighting plugin on my github but it is much too complex. I tried to get the nimrod compiler to do the highlighting, which works but I think a regex would have been a better plan |
21:33:00 | Araq | I still think we should clearly say "ENDB is not maintained anymore and .injectStmt is superior anyway" |
21:33:13 | * | gsingh93_ joined #nimrod |
21:33:35 | Jehan_ | It isn't? |
21:33:47 | Araq | well barely |
21:34:00 | dom96 | what is injectStmt and why is it better? |
21:34:01 | Demos | I tried this because endb failed to work |
21:34:14 | Demos | where would I put code to emit an intrinsic on unhandled exceptions |
21:34:26 | Jehan_ | Not sure that injectStmt is really an adequate alternative. :) |
21:34:53 | Demos | well debugInfo, injectStmt, lineDir, and genMapping should do it |
21:36:12 | * | gsingh93 quit (Ping timeout: 245 seconds) |
21:36:59 | Jehan_ | Just with 90% less usability. :) |
21:37:11 | Araq | Jehan_: oh but it is. use the new -d:corruption, get the ID of the corrupted object, injectStmt some check and see where it's overwritten |
21:38:06 | Jehan_ | Well, if I need a debugger, I generally need more than just some information about the state of the program when it crashed. |
21:38:30 | Jehan_ | E.g., I may need to single step through a routine to see if it actually does what I think it does. |
21:38:39 | Demos | Jehan_: which you can do |
21:38:47 | Demos | as demonstrated by that screenshot |
21:39:14 | dom96 | endb is/was much better |
21:39:39 | Jehan_ | And inspect non-trivial data structures without a lot of painfully complex GDB commands?
21:39:51 | Araq | well my debugging has changed I think |
21:40:01 | Demos | speaking of, genmapping is broken |
21:40:06 | Jehan_ | Mind you, I rarely need that in Nimrod. |
21:40:08 | dom96 | Araq: Most people don't debug corruptions. |
21:40:10 | Demos | uses the old style c file names |
21:40:23 | Araq | I used to use breakpoints, now I don't. I only need watchpoints. |
21:40:29 | Jehan_ | Usually, once I have the stacktrace of a bug, it's obvious where I screwed up. |
21:40:36 | Demos | wait no it does not |
21:40:42 | Araq | and traditional debuggers suck at watchpoints |
21:41:20 | OrionPK | someone should write a blog post demonstrating how inject stmt is superior |
21:41:57 | Jehan_ | Speaking of potentially unsupported features, how experimental exactly is --symbolfiles:on? |
21:42:14 | Araq | it used to work and then zahary broke it and never fixed it |
21:42:37 | Jehan_ | Ouch, that's too bad. |
21:42:43 | Araq | spent 2 weeks implementing it and compile times were non-existent anymore
21:42:59 | Jehan_ | Nimrod really does slow down once you get beyond a couple hundred KLOC. |
21:43:16 | * | springbok joined #nimrod |
21:43:37 | Jehan_ | Yup, it was pretty amazing when I tried it (in 0.9.2, I think). |
21:46:15 | dom96 | damn. It seems my buffered recv implementation is incorrect. |
21:48:33 | dom96 | Araq: We need to figure out how to deal with exceptions in async code. |
21:49:02 | Jehan_ | What are you doing right now? |
21:49:18 | Demos | Oo {.emit: "__debugbreak();".} |
21:49:28 | Araq | last time I thought about it, it's not that hard to do with the C backend and impossible with the C++ backend ... |
21:49:37 | Demos | now I just need to figure how to unmangle the variable names |
21:49:56 | Araq | locals are not mangled if the compiler can avoid it, Demos |
21:50:03 | Demos | the usual thing to do is kill the entire process if an exception were to go across threads |
21:50:09 | dom96 | I think I figured out a way of transforming a try stmt inside an async proc into something which handles the error correctly without having to generate a second try stmt |
21:50:27 | Araq | I think that's not even possible |
21:50:32 | Demos | yeah, I noticed, but they are still a bit funny |
21:50:55 | dom96 | Jehan_: The try statement is currently unsupported in async procs because you cannot have a yield inside of a try statement in closure iterators. |
21:51:04 | Demos | and having the callstack display nimrod style decls would be nice |
21:51:19 | dom96 | Interestingly C# only recently started supporting exceptions in async procs too |
21:51:31 | Araq | Demos: just use writeStackTrace() at strategic places |
21:52:20 | Jehan_ | Araq: Does the compiler get tripped up by local names like linux? |
21:52:34 | Demos | yeah, I dont really want to parse that and then figure how to get VS to find it |
21:52:52 | Jehan_ | dom96: Gotcha. |
21:52:59 | dom96 | Araq: Would a single try statement be the end of the world? |
21:53:03 | Araq | Jehan_: not sure what you mean |
21:53:41 | Jehan_ | Some C compilers on Linux have -Dlinux= by default. Solaris has something similar. |
21:54:08 | Araq | dom96: no, go ahead please |
21:55:32 | Araq | Jehan_: oh yeah the preprocessor is bad for the non-mangled names we generate |
21:55:57 | * | gsingh93- joined #nimrod |
22:02:44 | * | brson quit (Quit: leaving) |
22:03:32 | * | springbok quit (Changing host) |
22:03:32 | * | springbok joined #nimrod |
22:34:32 | * | Skrylar joined #nimrod |
22:41:08 | * | gsingh93_ quit (Ping timeout: 246 seconds) |
22:41:14 | Matthias247 | dom96: they had exception support for async/await all the time. What is new is that you can now also await in catch blocks - but I think there aren't too many use cases for that |
22:42:10 | * | gsingh93 joined #nimrod |
22:43:13 | * | gsingh93_ joined #nimrod |
22:44:48 | dom96 | Matthias247: were you able to await inside of a 'try' previously? |
22:45:05 | * | xenagi joined #nimrod |
22:45:59 | Matthias247 | dom96: yes. I think that is the basic requirement. To fetch the optional exception value returned by a Task |
22:46:45 | Matthias247 | if the task returns a value then await normally finishes. If it returns an exception then await throws and it jumps in the associated catch block |
22:46:52 | Araq | Matthias247: interesting. I tried it in visual studio and the compiler prevented it. I must have been mistaken. |
22:47:55 | Matthias247 | I'll try again :) |
22:51:36 | * | gsingh93- quit (Remote host closed the connection) |
22:53:12 | Matthias247 | Araq, dom96: works just as expected: http://pastebin.com/SqSht0xc |
22:53:33 | * | gsingh93- joined #nimrod |
22:54:13 | Araq | Matthias247: for c# 4.0 ? |
22:54:29 | Matthias247 | Araq: 4.5 is required for async/await |
22:55:16 | Araq | oh yeah, right |
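The C# behaviour Matthias247 pasted has a direct analogue in Python's asyncio (a sketch, not a transcription of his pastebin): an exception raised inside a task is stored, and re-raised at the point of the `await`, so an ordinary try around the await catches it.

```python
import asyncio

async def failing_task():
    raise ValueError("boom")

async def main() -> str:
    task = asyncio.ensure_future(failing_task())
    try:
        await task                # the stored exception re-raises here
    except ValueError as e:
        return f"caught: {e}"
    return "no exception"

result = asyncio.run(main())
print(result)                     # caught: boom
```

This is the "fetch the optional exception value returned by a Task" behaviour described above: the task completes with an exception instead of a value, and the await is where it surfaces.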
22:56:00 | * | meguli_ joined #nimrod |
22:57:13 | Skrylar | meep |
22:57:55 | Matthias247 | however I'm not sure if I find async/await a good thing. I worry that most people won't recognize that the state after the await can be totally mutated because any amount of other code runs in between - even on the same thread
22:58:58 | * | meguli__ joined #nimrod |
23:00:18 | * | meguli_ quit (Ping timeout: 240 seconds) |
23:04:08 | * | meguli__ quit (Quit: Page closed) |
23:05:55 | Jehan_ | Matthias247: That's a general question of dealing with/avoiding race conditions on shared memory. |
23:07:18 | Matthias247 | Jehan_: the funny thing is that these are not the classical memory race conditions that you have with real multithreading. It's just an asynchronicity that you don't expect |
23:08:36 | Jehan_ | Underneath, it's still a data race. |
23:08:47 | Jehan_ | Well, to me at least it's a distinction without a difference. |
23:09:01 | Matthias247 | if you have a pure callback based API then the users expect that the callback is invoked at some other time and things can happen in between |
23:09:16 | Matthias247 | but that has other issues |
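Matthias247's worry can be demonstrated in a few lines of asyncio (an illustrative sketch): the program is single-threaded, yet a read-modify-write across an `await` still loses updates, because other coroutines run at every suspension point.

```python
import asyncio

counter = 0

async def bump() -> None:
    global counter
    local = counter            # read
    await asyncio.sleep(0)     # suspension point: other coroutines run here
    counter = local + 1        # write back a now-stale value

async def main() -> None:
    # Ten concurrent bumps on one thread, no locks.
    await asyncio.gather(*(bump() for _ in range(10)))

asyncio.run(main())
print(counter)   # 1, not 10: every task read counter before any wrote it
```

No OS threads are involved, so this is not a memory race in the hardware sense, but as Jehan_ says, underneath it is still a data race: interleaved access to shared state with no synchronization across the suspension point.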
23:09:22 | * | darkf joined #nimrod |
23:10:18 | * | shodan45 quit (Quit: Konversation terminated!) |
23:24:19 | * | q66 quit (Quit: Leaving) |
23:32:24 | * | Jesin quit (Quit: Leaving) |
23:35:28 | Varriount | Meep |
23:37:17 | EXetoC | beep |
23:42:39 | * | superfunc joined #nimrod |
23:46:29 | Varriount | Solution to all multi-threading related problems - Don't use multiple threads |
23:48:04 | EXetoC | KISS |
23:49:18 | * | superfunc quit (Ping timeout: 240 seconds) |