| 00:00:29 | * | ics quit (Quit: My MacBook Pro has gone to sleep. ZZZzzz…) | 
| 00:00:30 | * | io2 quit () | 
| 00:02:57 | * | gsingh93_ joined #nimrod | 
| 00:04:52 | * | flaviu joined #nimrod | 
| 00:07:33 | * | Trustable quit (Quit: Leaving) | 
| 00:33:03 | * | Jesin quit (Ping timeout: 240 seconds) | 
| 00:47:33 | * | Jesin joined #nimrod | 
| 00:51:44 | * | Skrylar joined #nimrod | 
| 00:56:03 | * | Skrylar quit (Ping timeout: 240 seconds) | 
| 01:11:31 | * | asterite quit (Quit: Leaving.) | 
| 01:21:28 | * | Demos joined #nimrod | 
| 01:24:09 | * | goobles quit (Quit: Page closed) | 
| 01:27:23 | * | saml_ joined #nimrod | 
| 01:39:20 | * | brson_ joined #nimrod | 
| 01:40:30 | * | brson quit (Read error: Connection reset by peer) | 
| 01:41:03 | * | asterite joined #nimrod | 
| 02:02:30 | reactormonk | which one was the last mac powerpc? https://github.com/Araq/Nimrod/issues/1193#issuecomment-47559377 | 
| 02:02:37 | * | brson_ quit (Quit: leaving) | 
| 02:03:12 | Demos | I /think/ the little powerbook G4s | 
| 02:03:19 | * | brson joined #nimrod | 
| 02:03:41 | Demos | but when apple only supports 10.9 I think it is insane to support PPC mac | 
| 02:03:50 | Demos | it is hardly "every mac user" | 
| 02:04:03 | * | superfunc joined #nimrod | 
| 02:12:41 | * | Nimrod_ joined #nimrod | 
| 02:12:59 | * | ics joined #nimrod | 
| 02:15:19 | * | Nimrod quit (Ping timeout: 248 seconds) | 
| 02:27:56 | * | nande quit (Read error: Connection reset by peer) | 
| 02:39:55 | * | saml_ quit (Quit: Leaving) | 
| 02:45:27 | * | brson quit (Quit: leaving) | 
| 02:50:17 | * | Jesin quit (Ping timeout: 252 seconds) | 
| 02:52:04 | * | Jesin joined #nimrod | 
| 03:23:56 | * | johnsoft quit (Ping timeout: 260 seconds) | 
| 03:28:21 | * | superfunc quit (Ping timeout: 272 seconds) | 
| 03:33:00 | flaviu | reactormonk, Demos: I replied to that fwiw, with essentially the same arguments you gave | 
| 03:35:30 | Demos | yeah | 
| 03:37:12 | * | xtagon joined #nimrod | 
| 04:03:36 | * | johnsoft joined #nimrod | 
| 04:08:48 | * | Skrylar joined #nimrod | 
| 04:16:44 | * | Demos_ joined #nimrod | 
| 04:20:07 | * | Demos quit (Ping timeout: 248 seconds) | 
| 04:26:25 | * | ARCADIVS quit (Quit: WeeChat 0.4.3) | 
| 04:33:11 | * | asterite quit (Quit: Leaving.) | 
| 04:54:16 | * | kshlm joined #nimrod | 
| 04:55:57 | * | ics quit (Quit: My MacBook Pro has gone to sleep. ZZZzzz…) | 
| 05:06:22 | * | flaviu quit (Ping timeout: 264 seconds) | 
| 05:33:32 | * | xtagon quit (Read error: Connection reset by peer) | 
| 05:35:04 | * | ics joined #nimrod | 
| 05:41:46 | * | Nimrod_ quit (Ping timeout: 264 seconds) | 
| 05:47:55 | * | hoverbear quit () | 
| 05:56:46 | * | Jesin quit (Ping timeout: 264 seconds) | 
| 05:58:03 | * | boydgreenfield joined #nimrod | 
| 06:38:44 | * | Jesin joined #nimrod | 
| 07:14:28 | * | io2 joined #nimrod | 
| 07:26:42 | * | zahary quit (Quit: Leaving.) | 
| 07:29:25 | * | boydgreenfield quit (Quit: boydgreenfield) | 
| 07:48:41 | * | ics quit (Quit: My MacBook Pro has gone to sleep. ZZZzzz…) | 
| 07:50:40 | * | io2 quit () | 
| 07:55:07 | * | Trustable joined #nimrod | 
| 08:01:35 | * | BitPuffin quit (Ping timeout: 252 seconds) | 
| 08:05:14 | * | goobles joined #nimrod | 
| 08:05:20 | * | kunev joined #nimrod | 
| 08:14:20 | * | kemet joined #nimrod | 
| 08:14:27 | * | kemet quit (Remote host closed the connection) | 
| 08:14:50 | * | kemet joined #nimrod | 
| 08:19:11 | * | kemet quit (Remote host closed the connection) | 
| 08:35:24 | * | Demos_ quit (Read error: Connection reset by peer) | 
| 08:39:26 | * | BitPuffin joined #nimrod | 
| 10:43:16 | * | q66 joined #nimrod | 
| 11:05:14 | * | io2 joined #nimrod | 
| 11:22:57 | * | saml_ joined #nimrod | 
| 11:37:18 | * | kunev quit (Read error: Connection reset by peer) | 
| 11:38:17 | * | kunev joined #nimrod | 
| 11:41:10 | * | Skrylar quit (Ping timeout: 264 seconds) | 
| 11:41:21 | * | Skrylar joined #nimrod | 
| 12:13:31 | * | saml_ quit (Quit: Leaving) | 
| 12:19:05 | * | kunev quit (Quit: leaving) | 
| 12:20:18 | * | boDil_ joined #nimrod | 
| 12:21:04 | * | kunev joined #nimrod | 
| 12:26:06 | * | untitaker quit (Ping timeout: 255 seconds) | 
| 12:27:36 | * | io2 quit (Ping timeout: 260 seconds) | 
| 12:30:55 | * | untitaker joined #nimrod | 
| 12:56:36 | * | BitPuffin quit (Quit: See you on the dark side of the moon!) | 
| 13:04:30 | boDil_ | http://www.drdobbs.com/architecture-and-design/the-best-of-the-first-half/240168580 | 
| 13:04:33 | boDil_ | Good job, Araq | 
| 13:13:33 | * | kshlm quit (Ping timeout: 240 seconds) | 
| 13:26:42 | * | io2 joined #nimrod | 
| 13:27:29 | * | boDil_ quit (Quit: Page closed) | 
| 13:29:32 | reactormonk | Anyone got a macbook here? | 
| 13:42:48 | * | OrionPK quit (Read error: Connection reset by peer) | 
| 13:43:05 | * | OrionPK joined #nimrod | 
| 13:57:11 | * | johnsoft quit (Ping timeout: 248 seconds) | 
| 13:58:01 | * | kshlm joined #nimrod | 
| 13:58:59 | * | kshlm quit (Client Quit) | 
| 14:00:36 | * | johnsoft joined #nimrod | 
| 14:00:39 | * | johnsoft quit (Read error: Connection reset by peer) | 
| 14:02:40 | OrionPK | yeah i have a macbook | 
| 14:07:21 | * | darkf quit (Quit: Leaving) | 
| 14:07:33 | def- | is it always necessary to manually close a file? | 
| 14:12:48 | * | BitPuffin joined #nimrod | 
| 14:14:36 | * | hoverbear joined #nimrod | 
| 14:26:59 | * | johnsoft joined #nimrod | 
| 14:32:20 | flyx | reactormonk: I have one too | 
| 14:59:30 | * | asterite joined #nimrod | 
| 15:01:15 | * | nande joined #nimrod | 
| 15:02:02 | * | hoverbear quit () | 
| 15:09:02 | * | hoverbear joined #nimrod | 
| 15:09:39 | * | hoverbear quit (Client Quit) | 
| 15:09:56 | reactormonk | could you test the corresponding bug? | 
| 15:13:09 | * | kshlm joined #nimrod | 
| 15:16:11 | flyx | reactormonk: #1193? you'd need an iBook or PowerBook for that, MacBooks already have Intel | 
| 15:16:53 | flyx | reactormonk: anyway, I have a PowerBook lying around, I'll see if it still boots ;) | 
| 15:34:15 | * | asterite left #nimrod (#nimrod) | 
| 15:34:49 | reactormonk | flyx, nah, it's apparently mac too | 
| 15:36:53 | * | Jesin quit (Quit: Leaving) | 
| 15:37:02 | flyx | ah okay, trying to reproduce it here | 
| 15:40:11 | flyx | reactormonk: aye, the build of koch fails with the error quoted in the issue (iMac, OSX 10.9) | 
| 15:52:49 | * | boydgreenfield joined #nimrod | 
| 15:54:26 | * | gkoller joined #nimrod | 
| 15:55:20 | * | kunev quit (Quit: leaving) | 
| 16:09:44 | Skrylar | def-: it's usually a good idea to close files early, even if gcs/destructors do it for you | 
| 16:10:05 | Skrylar | due to file locks / write caches and whatnots | 
| 16:11:35 | * | gkoller quit (Ping timeout: 248 seconds) | 
| 16:24:02 | * | ics joined #nimrod | 
| 16:27:11 | * | Puffin joined #nimrod | 
| 16:29:03 | * | BitPuffin quit (Ping timeout: 240 seconds) | 
| 16:36:15 | flyx | so are there any jobs for Nimrod coders yet? I could do with one ^^ | 
| 16:39:00 | * | kunev joined #nimrod | 
| 16:43:08 | * | Demos joined #nimrod | 
| 17:00:42 | def- | Skrylar: i was hoping that files get closed when they go out of scope, or maybe python's "with" | 
| 17:00:48 | * | Demos_ joined #nimrod | 
| 17:01:06 | def- | but from my experiments that's not happening | 
| 17:04:11 | def- | i can add a destructor that calls close, which seems to work. hm | 
| 17:04:31 | Skrylar | well TFile is just a thin wrapper over FILE* | 
| 17:04:34 | * | Demos quit (Ping timeout: 264 seconds) | 
| 17:04:56 | Skrylar | def-: however it's possible to do that, using destructors, or use a template that closes the handle | 
| 17:05:30 | def- | Skrylar: I'm also wondering whether that should be the default | 
| 17:05:56 | Skrylar | def-: i donno. destructors are wiggly | 
| 17:06:07 | Skrylar | what if i want to pass a file handle to another struct? | 
| 17:06:23 | Skrylar | the destructor runs for my local one, closes the file, now the remote handle is invalid | 
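The Python-style "with" pattern def- is after can already be written as a template, sidestepping the destructor lifetime problem Skrylar describes: the file is closed when the block ends, and nothing escapes. A minimal sketch using the era's TFile API (the filename is a made-up example):

```nim
template withFile(f: expr, filename: string, mode: TFileMode,
                  body: stmt): stmt {.immediate.} =
  var f: TFile
  if open(f, filename, mode):
    try:
      body          # user code runs with the open handle
    finally:
      close(f)      # always closed, even on exceptions
  else:
    quit("cannot open: " & filename)

withFile(txt, "example.txt", fmWrite):
  txt.writeln("hello")
```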
| 17:06:32 | * | hoverbear joined #nimrod | 
| 17:09:22 | * | _dLog is now known as dLog | 
| 17:10:20 | Araq | Skrylar: which is why I plan to require a particular form of escape analysis for destructors | 
| 17:10:42 | Araq | destructors make no sense without escape analysis IMHO | 
| 17:11:16 | Araq | you end up with shitty "compiler can elide copy assignments" rules otherwise | 
| 17:11:42 | Skrylar | shitty compilers you say | 
| 17:11:55 | Skrylar | clearly you just need to write a FORTH compiler and base nimrod on top of that | 
| 17:11:56 | * | Skrylar ducks | 
| 17:12:45 | Skrylar | actually i think i've seen a few of those and they aren't bad; though they have a lot of stack twaddling to them | 
| 17:13:13 | def- | oh, and "with" is a keyword in Nimrod, but is it actually used? | 
| 17:14:14 | * | boydgreenfield quit (Quit: boydgreenfield) | 
| 17:17:45 | * | icebattle joined #nimrod | 
| 17:18:34 | * | boydgreenfield joined #nimrod | 
| 17:21:01 | Araq | def-: kind of | 
| 17:21:27 | Araq | type foo = distinct int with ...  is already parsed iirc | 
| 17:21:58 | * | Matthias247 joined #nimrod | 
| 17:22:02 | def- | what does it do, Araq? | 
| 17:22:44 | Araq | type foo = distinct int with `.`, `==` | 
| 17:22:59 | def- | ah, that's cool | 
| 17:23:18 | def- | no more 20 borrows | 
| 17:24:43 | Araq | also 'with' was planned as a replacement for .push | 
| 17:24:56 | Araq | with overflowChecks=on: | 
| 17:25:01 | Araq | echo a+ b | 
| 17:25:20 | Araq | dunno if we'll do it though | 
| 17:25:43 | Araq | .push has the advantage that it doesn't lead to excessive nesting | 
| 17:37:42 | * | brson joined #nimrod | 
| 17:41:52 | Demos_ | can you say distinct int with * | 
| 17:42:01 | Demos_ | or maybe with all | 
| 17:43:53 | Araq | type foo = int # yay, now works "with all" | 
| 17:44:17 | Demos_ | but there is an implicit conversion then | 
| 17:44:23 | Araq | "with all" makes no sense | 
| 17:44:35 | Araq | dollar*dollar != dollar | 
| 17:44:48 | Demos_ | yeah I spose | 
| 17:44:52 | Araq | TaintedString has similar typing rules | 
| 17:45:44 | Araq | there is no "implicit conversion" either, it's simply a type alias | 
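For reference, the "20 borrows" def- alludes to look like this today; the proposed `with` clause would compress the pragma list. A sketch of Araq's dollar example, where `*` is deliberately not borrowed because dollar*dollar != dollar:

```nim
type
  TDollar = distinct int

proc `+`(a, b: TDollar): TDollar {.borrow.}
proc `==`(a, b: TDollar): bool {.borrow.}
proc `$`(d: TDollar): string {.borrow.}

var price = TDollar(20)
echo price + TDollar(5)   # ok: `+` is borrowed from int
# echo price * price      # compile error: `*` is not borrowed, and shouldn't be
```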
| 17:57:11 | * | hoverbea_ joined #nimrod | 
| 18:00:54 | * | hoverbear quit (Ping timeout: 255 seconds) | 
| 18:05:46 | * | Puffin quit (Quit: See you on the dark side of the moon!) | 
| 18:06:03 | * | BitPuffin joined #nimrod | 
| 18:14:18 | dom96 | hi | 
| 18:46:42 | BlameStross | I'm about to hack together a light-weight graph library. Is there any work done on variable-at-runtime-sized matrices? | 
| 18:47:20 | BlameStross | or do I get to hack that together myself? | 
| 18:47:31 | dom96 | BlameStross: What about bignum? | 
| 18:47:55 | BlameStross | It is working minus division. Right now I am working on the stuff I am hypothetically paid for | 
| 18:48:44 | BlameStross | BigNum was for fun. I am writing a network topology simulation and analysis tool for research data for an academic paper. | 
| 18:49:11 | BlameStross | tis a one-off. I've already written it in python but I need it to go faster. | 
| 18:49:19 | Araq | BlameStross: I had a graph library based on sparse integer bit sets (comparable to TIntSet), but lost the code | 
| 18:49:55 | Araq | but TIntSet might be good enough already, depending on what you need to do with the graphs | 
| 18:50:54 | BlameStross | Araq: I'm not doing anything that crazy, just add-node, add-edge, find inter-node distances. | 
| 18:51:38 | Araq | proc addEdge(a, b: int) = s.incl(a*N + b) # where N is the number of nodes in the graph | 
| 18:52:22 | BlameStross | the way I'm familiar with implementing this is as 2 half-matrices of edges, one as transpose. | 
| 18:53:38 | BlameStross | Araq: That lets me store the presence of an edge. I'd need another structure to store edge weights anyway. | 
| 18:53:52 | Araq | well that was my question | 
| 18:55:16 | flyx | is there anything I can do to help fix #903? | 
| 18:55:24 | flyx | that's kind of blocking me right now | 
| 18:55:29 | BlameStross | Honestly I am going to go and make something needlessly ugly out of sequences because I think it will code up faster | 
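Araq's one-line incl encoding above can be fleshed out into a tiny sketch with the intsets module (N, the node bound, is an assumed constant; for undirected graphs both directions are stored, and edge weights would still need a separate table as BlameStross notes):

```nim
import intsets

const N = 1000            # assumed upper bound on node count

var edges = initIntSet()

proc addEdge(a, b: int) =
  edges.incl(a*N + b)
  edges.incl(b*N + a)     # undirected: store both directions

proc hasEdge(a, b: int): bool =
  edges.contains(a*N + b)

addEdge(3, 7)
echo hasEdge(7, 3)        # true
```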
| 19:01:04 | * | _blank_ joined #nimrod | 
| 19:01:17 | * | superfunc joined #nimrod | 
| 19:01:23 | superfunc | sup everyone | 
| 19:02:09 | Demos_ | BlameStross, for runtime sized matrices store the width and height and then a seq of width*height length | 
| 19:02:23 | Araq | flyx: I'm working on these VM bugs right now | 
| 19:02:31 | flyx | \o/ | 
| 19:02:48 | * | flyx fetches some Coffee for Araq | 
| 19:02:59 | * | _blank_ left #nimrod (#nimrod) | 
| 19:03:08 | BlameStross | Demos_: yeah, that is the plan right now. I might do something a little more complex because I am only doing regular graphs not digraphs so I only need half of a matrix | 
| 19:04:53 | Araq | flyx: however, usually global .compileTime variables suggest your macro doesn't capture enough | 
| 19:04:59 | Araq | mymacro: | 
| 19:05:05 | Araq | # .. whole module here | 
| 19:05:25 | Araq | is possible too | 
| 19:05:37 | * | _blank_ joined #nimrod | 
| 19:06:17 | * | _blank_ quit (Client Quit) | 
| 19:06:21 | BlameStross | Demos_: the fun bit is changing the matrix size without clobbering old edges | 
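Demos_'s width*height seq layout, plus a grow proc that recopies entries by (x, y) position so old edges survive a resize. A rough, unoptimized sketch (a half-matrix for undirected graphs would halve the storage):

```nim
type
  TMatrix = object
    width, height: int
    data: seq[float]

proc initMatrix(w, h: int): TMatrix =
  result.width = w
  result.height = h
  newSeq(result.data, w*h)      # zero-initialized

proc `[]`(m: TMatrix, x, y: int): float =
  m.data[y * m.width + x]

proc `[]=`(m: var TMatrix, x, y: int, v: float) =
  m.data[y * m.width + x] = v

proc grow(m: var TMatrix, newW, newH: int) =
  # copy cell by cell so old entries keep their (x, y) positions
  var grown = initMatrix(newW, newH)
  for y in 0 .. m.height-1:
    for x in 0 .. m.width-1:
      grown[x, y] = m[x, y]
  m = grown
```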
| 19:06:35 | flyx | Araq: that's what I'm doing in emerald. but I need some cache across macro calls for complex stuff like template inheritance | 
| 19:06:36 | * | _blank_ joined #nimrod | 
| 19:06:53 | Demos_ | what is emerald? | 
| 19:07:11 | flyx | Demos_: https://github.com/flyx/emerald | 
| 19:07:16 | * | _blank_ quit (Client Quit) | 
| 19:07:26 | Demos_ | neato | 
| 19:07:55 | Demos_ | if it makes you feel better I use macros to generate honest-to-god runtime global variables :D | 
| 19:08:16 | * | Jehan_ joined #nimrod | 
| 19:08:33 | flyx | ^^ | 
| 19:09:45 | * | Skrylar takes Demos_'s coding privileges and puts them in tupperware for using globals | 
| 19:09:56 | Demos_ | I do not access them directly in my code | 
| 19:10:10 | Demos_ | like you need a handle to access them | 
| 19:10:20 | Demos_ | the idea came to me in a dreem | 
| 19:10:30 | * | _blank_ joined #nimrod | 
| 19:10:32 | Demos_ | s/dreem/dream/ | 
| 19:11:10 | flyx | one problem I discovered is that the user can accidentally hit the name of any variable you introduce in a macro | 
| 19:11:29 | Araq | gensym? | 
| 19:11:44 | Araq | bindSym? | 
| 19:12:18 | * | _blank_ quit (Client Quit) | 
| 19:12:29 | flyx | Araq: ah, didn't know about that, thanks | 
| 19:12:44 | * | flyx goes updating his code | 
| 19:12:58 | Skrylar | sweeeeet | 
| 19:13:09 | Skrylar | you can tell vim to repeat things on lines | 
| 19:13:24 | * | Skrylar just read about using ranges to tell it to run macros | 
| 19:13:52 | fowl | BlameStross, did yyou see this https://github.com/blamestross/nimrod-vectors/issues/1 | 
| 19:18:08 | Demos_ | static[int] does "work" in types, but it screws up overload resolution | 
| 19:18:23 | Demos_ | vectors of different sizes do not like to be considered seperate for OR | 
| 19:18:42 | fowl | well its not usable there yet | 
| 19:18:43 | flyx | I don't seem to be able to use bindSym to bind to an identifier I created with genSym | 
| 19:19:25 | BlameStross | fowl: yep. I made some changes based on it (mostly adding proc `[]=`) | 
| 19:21:08 | Araq | flyx: nor should you | 
| 19:21:22 | Araq | you use gensym and then that's it | 
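A minimal sketch of the genSym approach Araq suggests (the macro name and body are made up, and it assumes the macros module's quote is available): the gensym'd loop variable cannot collide with anything in the caller's scope, which is exactly the accidental-capture problem flyx hit.

```nim
import macros

macro duplicate(n: int, body: stmt): stmt =
  # genSym produces a fresh identifier the user cannot name or shadow
  let i = genSym(nskForVar, "i")
  result = quote do:
    for `i` in 1 .. `n`:
      `body`

var i = 99          # a user `i` that the macro must not capture
duplicate(3):
  echo "hi ", i     # still refers to the user's i, prints 99 each time
```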
| 19:21:28 | BlameStross | fowl: While it would succeed in allowing type checking to work for Vector dimensions, it is neither intuitive nor simple. I think it would be better just to wait for static[int] to work. | 
| 19:23:22 | BlameStross | At least for the work I am doing with this library, I am never mixing dimension count in the same program. Even if I do mess that up, it will be caught fast at runtime and have a useful error. | 
| 19:24:47 | flyx | Araq: hm. so I just copy the returned PNimrodNode everywhere? | 
| 19:25:10 | Araq | flyx: yes | 
| 19:25:27 | Araq | you don't even have to copy it (I hope) | 
| 19:26:03 | flyx | ah, okay. yes, that should be usable. and it's guaranteed to work even if I use it in immediate macros where the identifiers are not resolved yet? | 
| 19:26:30 | Araq | should work, yes | 
| 19:26:41 | flyx | great | 
| 19:26:42 | fowl | BlameStross, whats not intuitive | 
| 19:28:09 | BlameStross | so, to make a vector in R^k space, I need to define a custom ordinal type which is valid for the range 0..k-1, then use that type as the generic argument for every vector | 
| 19:28:47 | fowl | BlameStross, where in my example do you see generic arguments | 
| 19:29:52 | * | goobles quit (Ping timeout: 246 seconds) | 
| 19:30:32 | fowl | nowhere, because you see vec[0], not vec[0..1][0] | 
| 19:32:01 | BlameStross | no but you have: | 
| 19:32:01 | BlameStross | Vector[0..20] = VecA+VecB | 
| 19:32:31 | * | nande quit (Read error: Connection reset by peer) | 
| 19:32:45 | BlameStross | well, var vec : Vector[0..20] = VecA+VecB | 
| 19:33:15 | fowl | BlameStross, a and b are vectors of 21/ | 
| 19:33:16 | fowl | ? | 
| 19:33:28 | * | boydgreenfield quit (Quit: boydgreenfield) | 
| 19:33:39 | BlameStross | fowl: sure | 
| 19:34:01 | fowl | var vec = veca+vecb | 
| 19:34:40 | BlameStross | I thought you had to declare type in the var statement | 
| 19:35:00 | fowl | BlameStross, well i do it in my example "var v = initVector(2)" | 
| 19:35:20 | Araq | omg, where did you get that idea, BlameStross ? | 
| 19:35:28 | BlameStross | the tutorial? | 
| 19:35:35 | BlameStross | that I skimmed | 
| 19:35:43 | BlameStross | 3-4 days ago | 
| 19:36:22 | BlameStross | and there it is in examples 2/3 of the way down | 
| 19:36:22 | Araq | Since the compiler knows that | 
| 19:36:24 | BlameStross | huh | 
| 19:36:24 | Araq | ``readLine`` returns a string, you can leave out the type in the declaration | 
| 19:36:25 | Araq | (this is called `local type inference`:idx:). So this will work too: | 
| 19:36:27 | Araq | .. code-block:: Nimrod | 
| 19:36:28 | Araq | var name = readLine(stdin) | 
| 19:36:41 | BlameStross | well, that simplifies a lot of code | 
| 19:37:15 | Araq | well most examples lack the type annotation in tutorial 1 | 
| 19:37:35 | Araq | your skimming capabilities really suck ;-) | 
| 19:37:40 | BlameStross | fowl: your proposal is now reasonable. I thought I'd have to haul around the complex type-def for every variable declaration. | 
| 19:38:02 | BlameStross | Araq: they do. I just start writing and reading compile errors | 
| 19:38:16 | BlameStross | which I will note are actually pretty good | 
| 19:39:15 | * | nande joined #nimrod | 
| 19:39:23 | Araq | ok ... I guess everybody learns differently | 
| 19:39:26 | fowl | BlameStross, nah you will only need the vector's range in type defs/func params, though you probably want to define Vector2, Vector3, etc right away | 
| 19:41:18 | * | Demos_ quit (Read error: Connection reset by peer) | 
| 19:41:39 | BlameStross | lol. Essentially I really only learn by doing. So I just start doing and survive the initial horrible resistance. | 
| 19:42:21 | BlameStross | the path to understanding is paved with stupid mistakes. | 
| 19:45:15 | BlameStross | I'll have some free-time to re-write tomorrow. | 
| 19:46:33 | * | boydgreenfield joined #nimrod | 
| 19:47:57 | * | Demos joined #nimrod | 
| 19:48:24 | * | Demos_ joined #nimrod | 
| 19:48:26 | * | superfunc quit (Quit: leaving) | 
| 19:50:33 | * | nande quit (Remote host closed the connection) | 
| 19:53:11 | * | Demos quit (Ping timeout: 272 seconds) | 
| 19:56:52 | BlameStross | fowl: proc initVector* (len:static[int]): auto = | 
| 19:57:09 | BlameStross | how does type "auto" work here? | 
| 19:59:03 | * | Jehan_ quit (Remote host closed the connection) | 
| 19:59:07 | reactormonk | BlameStross, short version of "the compiler can figure that out for me" | 
| 19:59:12 | * | nande joined #nimrod | 
| 20:00:05 | BlameStross | So if I instantiate 2d 3d and 22d vectors, it re-implements the proc for each type? | 
| 20:00:11 | BlameStross | at compile time | 
| 20:04:28 | * | Varriount|Mobile joined #nimrod | 
| 20:04:57 | * | Jehan_ joined #nimrod | 
| 20:09:42 | reactormonk | BlameStross, probably | 
| 20:10:25 | boydgreenfield | threadpool question: How can I limit the number of threads that get spawned if I want to iterate through a loop of, e.g., input files? `setMaxPoolSize(2)` doesn’t appear to work / do anything for me | 
| 20:10:41 | Araq | BlameStross: yes but with a bit of luck the linker merges them ... *cough* | 
| 20:11:37 | * | dLog is now known as spike021 | 
| 20:12:07 | * | spike021 is now known as dLog | 
| 20:13:41 | Araq | boydgreenfield: how many cores do you have? | 
| 20:14:00 | boydgreenfield | Araq: 4 (8 with hyperthreading) | 
| 20:15:15 | boydgreenfield | Araq: Interesting — it uses up to 8, but then stops unless I set setMaxPoolSize(>8) | 
| 20:15:29 | Araq | well yes | 
| 20:15:48 | Araq | it creates 8 threads in setup() already | 
| 20:15:53 | Araq | and then uses these | 
| 20:16:00 | boydgreenfield | Araq: Wait nevermind, it just uses up to 8 regardless. Perhaps I’ve missed it – how should I be changing the max and min threads? | 
| 20:16:10 | Araq | you shouldn't | 
| 20:16:37 | Araq | it adapts to your current CPU load | 
| 20:16:47 | boydgreenfield | Araq: So there’s no way to use less than all my cores in a pattern like `for input_file in input_files: spawn process_file(input_file)` | 
| 20:17:02 | Araq | yeah. for now. | 
| 20:17:19 | Araq | we can make it respect maxPoolSize properly | 
| 20:17:38 | Araq | it didn't occur to me that you like to throttle your CPU :P | 
| 20:17:40 | boydgreenfield | Araq: Got it. I guess I just kind of assumed setMaxPoolSize was being used and didn’t look further. | 
| 20:17:52 | Araq | well it is used | 
| 20:18:09 | Araq | but only to determine if it's allowed to create *more* threads ;-) | 
| 20:18:23 | boydgreenfield | Araq: More than the number of cores? | 
| 20:18:30 | Araq | yep | 
| 20:19:36 | boydgreenfield | Araq: What’s the syntax for that? I’m hitting a hard limit at 8 (cores w/ hyperthreading), using setMaxPoolSize(16) and then for i in 0.. <16: spawn X does two sets of 8, for example. | 
| 20:19:54 | * | nande quit (Read error: Connection reset by peer) | 
| 20:20:06 | boydgreenfield | Araq: Note that this probably isn’t a problem for my use case… just trying to understand what controls I *do* have available | 
| 20:20:37 | Araq | well *max* pool size still means the thread pool can do what it wants within this limit | 
| 20:21:11 | boydgreenfield | Araq: Actually, no it is a problem now that I think about it since I’m memory bound and may only want to use, e.g., two threads w/ 10GB of RAM each (on a machine w/ 16 cores) | 
| 20:21:28 | boydgreenfield | Can the user set the max? | 
| 20:23:13 | boydgreenfield | (Ah now looking through, it appears not) | 
| 20:23:32 | Araq | well please make a PR | 
| 20:23:45 | Araq | I'm not sure what you really need | 
| 20:24:03 | Araq | looks like people need to call setup() on their own then | 
| 20:24:21 | boydgreenfield | Will do – ya I’m forking now for that functionality. | 
| 20:24:27 | Araq | or we go the Go way and have some shitty environment variable | 
| 20:24:36 | Araq | nah ... | 
| 20:25:59 | boydgreenfield | The current syntax is elegant — but some additional control might be warranted for certain use cases (then again, maybe one should just be using a finer-grained control in that case vs. spawn) | 
| 20:26:04 | boydgreenfield | Thanks for the help. | 
| 20:26:21 | * | nande joined #nimrod | 
| 20:26:41 | Araq | well channels + createThread still work and can be faster | 
| 20:27:00 | Araq | especially since you already know you want exactly 2 threads | 
| 20:30:03 | * | nande quit (Read error: Connection reset by peer) | 
| 20:30:15 | boydgreenfield | Araq: I’ll take a look there. I’m kind of fumbling around as I didn’t have a good sense of what concurrency options to start with and where the design was headed. | 
| 20:33:17 | boydgreenfield | In the meantime, just writing a setup(min_threads, max_threads) proc and calling it manually works well. | 
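A sketch of the spawn/FlowVar pattern under discussion (processFile and the file names are placeholders; note that, per Araq, setMaxPoolSize only permits growth *beyond* the core count, it does not throttle below it):

```nim
import threadpool

proc processFile(path: string): int =
  # stand-in for the real per-file work
  result = path.len

setMaxPoolSize(16)    # allows more threads than cores; cannot force fewer

var pending: seq[FlowVar[int]] = @[]
for f in ["a.txt", "b.txt", "c.txt"]:
  pending.add(spawn processFile(f))

for fv in pending:
  echo ^fv            # ^ blocks until that spawn's result is ready
```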
| 20:33:24 | * | hoverbea_ quit (Ping timeout: 260 seconds) | 
| 20:34:20 | * | MayurYa joined #nimrod | 
| 20:34:20 | * | MayurYa quit (Changing host) | 
| 20:34:20 | * | MayurYa joined #nimrod | 
| 20:34:23 | Araq | Jehan_: I still don't know how your withHeap really works | 
| 20:34:46 | Jehan_ | Araq: In what regard? | 
| 20:34:56 | Araq | IF withHeap does the locking, that's so coarse that it can't parallelize anything | 
| 20:35:30 | Araq | but if it doesn't do that we need additional checks in newObj | 
| 20:35:55 | Jehan_ | First, as I said, it's a pretty barebones model. | 
| 20:36:29 | * | nande joined #nimrod | 
| 20:36:42 | Jehan_ | Second, without compiler support (I think, effects *might* work), you can't do readonly access. | 
| 20:37:09 | Jehan_ | Third, without compiler support, it's difficult to optimize some important cases. | 
| 20:37:24 | Jehan_ | That said, yes, you can parallelize quite a bit. | 
| 20:37:53 | Jehan_ | As with any form of locking, it's crucial to not hold the lock for long. | 
| 20:38:32 | Araq | well compiler support is a non-issue | 
| 20:38:41 | Jehan_ | You also want multiple shared heaps for such a setup, so that there's very little blocking. | 
| 20:39:05 | Jehan_ | Araq: I wasn't proposing this as a long-term solution, but as a relatively quick and easy hack. | 
| 20:39:36 | Jehan_ | If you want something long-term, it needs to be surrounded by quite a bit more language support, and then it stops being quick and easy. | 
| 20:39:43 | Araq | well spawn+FlowVar already enjoys compiler support | 
| 20:40:41 | Araq | interestingly it is easier than lambda lifting. but then everything is. | 
| 20:42:24 | Jehan_ | The most important optimizations would be (1) to facilitate read-only access, i.e. guaranteeing that a section of code doesn't modify the heap and (2) optimized deep-copying for transfering data between heaps for common use cases (int, float, string parameters in particular, as well as small tuples of these). | 
| 20:42:41 | Jehan_ | You'd also want language support to make the basic model more expressive. | 
| 20:42:54 | Jehan_ | See, e.g., what Eiffel does with SCOOP. | 
| 20:43:08 | Araq | Jehan_: can you outline some usages with your withHeap? | 
| 20:43:26 | Araq | I still don't see how it'll be used in practice and produce a speedup | 
| 20:44:40 | Jehan_ | Sure. For example, a shared hash table. Take N heaps, map buckets randomly to heaps. | 
| 20:45:17 | Jehan_ | Well, pseudo-randomly, you want the mapping to be reproducible or you'll never find stuff again. :) | 
| 20:46:58 | * | hoverbear joined #nimrod | 
| 20:47:43 | * | nande quit (Remote host closed the connection) | 
| 20:49:26 | Araq | well thinking about it ... that's unsafe like hell | 
| 20:49:37 | Jehan_ | Why would it be unsafe? | 
| 20:50:06 | Jehan_ | Or rather, unsafe can mean a lot of things. What kind of unsafe do you mean? | 
| 20:50:25 | Araq | you transform allocShared into 'new' with this withHeap primitive | 
| 20:50:40 | Jehan_ | No. | 
| 20:50:53 | Jehan_ | I don't. I'm using the old new. | 
| 20:50:57 | Araq | yes you do | 
| 20:51:01 | Araq | not literally | 
| 20:51:07 | Jehan_ | I'm temporarily switching out the thread-local heap. | 
| 20:51:14 | Araq | exactly | 
| 20:51:20 | Jehan_ | So that all new requests go to the shared heap. | 
| 20:51:34 | Jehan_ | the withHeap() interface prevents any points to the shared heap from escaping. | 
| 20:51:34 | Araq | and the types remain 'ref', 'string', 'seq' | 
| 20:51:46 | Jehan_ | Because all data that goes back and forth HAS to be serialized/deserialized. | 
| 20:51:52 | Araq | new(node) | 
| 20:51:55 | Jehan_ | pointers* | 
| 20:52:04 | Araq | withHeap(...): | 
| 20:52:13 | Araq | new(foreign) | 
| 20:52:17 | Jehan_ | Yes? | 
| 20:52:23 | Araq | foreign.next = node #oops | 
| 20:52:27 | Jehan_ | Can't. | 
| 20:52:36 | Jehan_ | withHeap calls a procvar. | 
| 20:52:48 | Araq | ah, ok | 
| 20:52:56 | Jehan_ | I was very specific there. | 
| 20:52:59 | Jehan_ | :) | 
| 20:53:07 | Jehan_ | Also, again why I called it pretty barebones. | 
| 20:53:15 | Jehan_ | You CAN do it as described above. | 
| 20:53:34 | Jehan_ | If you are careful with not capturing the environment. needs compiler support, though. | 
| 20:53:41 | Araq | in fact | 
| 20:54:01 | * | nande joined #nimrod | 
| 20:54:02 | Jehan_ | You can do with a closure that does capture-copying, though. | 
| 20:54:12 | Araq | we already have a write barrier for  'foreign.next = node' | 
| 20:54:44 | Araq | we could add an assert that the pointers must come from the same heap | 
| 20:54:59 | Jehan_ | But that's what I meant by needing optimization above in that the proposed quick-and-easy stuff simply serializes all data passing back and forth. | 
| 20:55:17 | Jehan_ | THAT can be inefficient as hell if you do it a lot for simple types. | 
| 20:55:49 | Jehan_ | But with compiler support, you can optimize a lot of common cases (ints, floats, strings, and composite types of simple types). | 
| 20:56:13 | Araq | yeah an optimized 'deepCopy' is in the works already | 
| 20:56:18 | Jehan_ | Another concern is the per-heap overhead. | 
| 20:57:53 | Jehan_ | I've been working on compiler optimization for reference counting (as an alternative to deferred RC) myself, because that can co-exist with stuff like tcmalloc or other implementations with a single physical heap as long as the "logical" heaps don't get mixed up. | 
| 20:58:48 | Araq | doing better than deferred RC is very very hard | 
| 20:59:06 | Araq | doing it statically, I mean | 
| 20:59:12 | Jehan_ | I know. | 
| 20:59:39 | Jehan_ | The next step would be to be able to mix and match. | 
| 21:00:29 | * | nande quit (Remote host closed the connection) | 
| 21:01:07 | * | MayurYa quit (Quit: *nix server crashed w/ bof) | 
| 21:01:20 | Araq | btw the planned way to do the "shared" hash table is: | 
| 21:01:24 | Araq | parallel: | 
| 21:01:39 | Araq | for i in 0.. <N: | 
| 21:02:17 | Araq | output[i] = spawn processPartialData(input[i]) | 
| 21:02:35 | Araq | merge(output[0.. <N]) | 
| 21:03:11 | * | boydgreenfield quit (Quit: boydgreenfield) | 
| 21:03:19 | Jehan_ | Inefficient or not good enough for a lot of important problems. | 
| 21:03:55 | Araq | pff you can say that to every solution :P | 
| 21:04:21 | Jehan_ | The problem is that very frequently (many parallel graph search algorithms in particular) you need all threads to be able to access ALL the partial hash tables. | 
| 21:04:35 | Jehan_ | Or you're going to do a lot of duplicate work. | 
| 21:04:52 | Jehan_ | What you propose is fine for functional type stuff. | 
| 21:05:10 | Jehan_ | But not for where the shared hash table is basically one big data accumulator. | 
| 21:05:40 | def- | I solved a few Rosetta Code tasks in Nimrod today, if anyone wants to take a look or has suggestions: http://rosettacode.org/wiki/Special:Contributions/Def | 
| 21:06:48 | * | nande joined #nimrod | 
| 21:08:33 | * | kunev quit (Ping timeout: 240 seconds) | 
| 21:12:13 | Jehan_ | "parallel:" is great for stuff that is basically about vectorization. But it's not so great for divide-and-conquer, traversal of irregular spaces, or other things that can't naturally be subdivided in subproblems of roughly equal size. | 
| 21:15:35 | Araq | I think it's good enough for divide-and-conquer too | 
| 21:16:25 | Araq | graph traversal is hard but we keep forgetting about cast[foreign ptr T](myref) | 
| 21:16:40 | Jehan_ | How would you implement factorial(1000000)? | 
| 21:17:03 | * | boydgreenfield joined #nimrod | 
| 21:17:19 | Araq | luckily I don't have to care about factorial(10...)  :-) | 
| 21:17:43 | Jehan_ | Heh. It's a simple example of divide and conquer. | 
| 21:17:43 | Araq | divide-and-conquer is 'sort' for me ;-) | 
| 21:19:05 | Jehan_ | factorial(n) = product(1..n) with product(m..n) = product(m, (n+m) div 2) * product((n+m) div 2 + 1, n) (stop recursing for n-m being sufficiently small). | 
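Jehan_'s recursive product, sketched with a standalone spawn (illustrative only: plain int overflows long before factorial(1000000), so a bignum type is assumed for the real thing, and the cutoff of 8 is arbitrary):

```nim
import threadpool

proc product(m, n: int): int =
  if n - m < 8:
    # small range: multiply sequentially
    result = 1
    for i in m .. n: result = result * i
  else:
    # divide and conquer: left half runs on the pool
    let mid = (m + n) div 2
    let left = spawn product(m, mid)   # standalone spawn -> FlowVar[int]
    let right = product(mid + 1, n)
    result = (^left) * right

echo product(1, 10)   # 3628800 == 10!
```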
| 21:19:29 | Araq | that's not the problem here | 
| 21:19:37 | Araq | the real problem is memory management | 
| 21:19:52 | * | hoverbea_ joined #nimrod | 
| 21:19:59 | Araq | for 'fac' (bignums) you get the problems you're thinking of | 
| 21:20:08 | Jehan_ | Not a problem, you can copy arguments and results between thread-local heaps. | 
| 21:20:23 | Araq | for 'sort' you don't need memory management (swap) and so it's fine | 
| 21:20:46 | * | hoverbear quit (Ping timeout: 264 seconds) | 
| 21:21:02 | Jehan_ | Runtime for this is dominated by sequential bigint multiplication, which is O(n^x), with x depending on your algorithm. | 
| 21:21:23 | Jehan_ | And which dwarfs the O(n) cost of copying. | 
| 21:21:42 | dom96 | Nice to see Nimrod mentioned on HN by people who I have never seen here. | 
| 21:21:50 | Araq | well then where's the problem? | 
| 21:22:09 | Jehan_ | Araq: The problem is that it's not easily expressible as a par-for loop. | 
| 21:22:10 | dom96 | (or at least as far as I can remember) | 
| 21:22:38 | Araq | Jehan_: so use a standalone 'spawn' instead. we had this discussion already | 
| 21:22:50 | * | hoverbear joined #nimrod | 
| 21:23:22 | Jehan_ | Araq: I don't mean that you can't do it, just that a par for implementation isn't ideal. | 
| 21:23:52 | Jehan_ | I know I can do it right now. In fact, I did it a while ago with createThread()/joinThread() even. :) | 
| 21:24:07 | * | hoverbea_ quit (Ping timeout: 248 seconds) | 
| 21:25:28 | Araq | a parfor implementation IS ideal when your problem somewhat fits it. it's *deterministic* parallelism. | 
| 21:25:58 | Jehan_ | Araq: Yup. | 
| 21:26:02 | Jehan_ | Not disagreeing. | 
| 21:27:30 | Araq | well now I still don't know if you know we not only got 'parallel' but also a standalone 'spawn'. which then returns a FlowVar | 
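The standalone `spawn` Araq mentions returns a `FlowVar` handle that is read back with `^`. A minimal sketch (the `fib` workload is invented for illustration; compile with `--threads:on`):

```nim
import threadpool

proc fib(n: int): int =
  # deliberately naive recursive workload
  if n < 2: return n
  fib(n - 1) + fib(n - 2)

# a standalone `spawn` (outside any `parallel` block) hands back a FlowVar
let fv: FlowVar[int] = spawn fib(30)

# ... unrelated work can run here while the task computes ...

echo ^fv   # `^` blocks until the spawned task has produced its value
```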
| 21:27:49 | Jehan_ | Umm … box(GC) expr? | 
| 21:28:01 | Jehan_ | Please tell me Rust has a way to macro this or stuff. | 
| 21:28:21 | Araq | It has, I think | 
| 21:29:00 | Jehan_ | I know it has macros, I'm just wondering if they can handle it. | 
| 21:29:07 | Araq | but it'll be discouraged because it makes code "hard to follow" and apparently it's 1970 and everybody uses VIM without language understanding plugins | 
| 21:29:53 | Jehan_ | The thing is: 99 out of 100 times I want fire-and-forget memory management. | 
| 21:30:45 | Jehan_ | But Rust assumes that it's the other way round. | 
| 21:31:00 | Araq | they got macros that can modify the grammar | 
| 21:31:08 | Araq | so it should be possible | 
| 21:31:52 | Jehan_ | Araq: Will it also interact properly with libraries? Will libraries allow for the fact that I don't give one damn about managing ownership? | 
| 21:34:11 | Araq | I don't know; but it will solve all of Haskell's problems. ;-) | 
| 21:35:33 | dom96 | Jehan_: Indeed. That is why I likely will never use Rust. IMO it will never gain Go's traction. | 
| 21:36:06 | dom96 | Because 99% of programmers want "fire-and-forget" memory management 100/100 times. | 
| 21:36:23 | Matthias247 | I think the "new" memory management makes it even more complicated | 
| 21:36:32 | Matthias247 | or at least even more noisy | 
| 21:36:33 | Demos_ | when you don't want fire and forget you may as well just use a GC | 
| 21:36:37 | Jehan_ | Well, to be clear, I do think that this exactly what some application domains need. But I'm not writing a monolithic web browser, a kernel model, or AAA video games. | 
| 21:37:00 | Jehan_ | s/model/module/ | 
| 21:37:12 | fowl | nimrods gc is good enough for all those things Jehan_ :p | 
| 21:38:02 | fowl | what causes firefox to eat up so much memory? maybe it needs some vitamin GC | 
| 21:38:09 | dom96 | I also think that the risk of Mozilla giving up on Rust is very real. | 
| 21:38:09 | Jehan_ | fowl: May be tricky in some cases. Plus, the cycle collector in its current incarnation may need an upgrade. | 
| 21:38:17 | dom96 | They still call it a "research project" IIRC | 
| 21:38:47 | Matthias247 | the risk is there independent of how the language actually performs :) | 
| 21:39:03 | Jehan_ | dom96: No idea. For what it's worth, I am happy with Rust prospering in its chosen niche. | 
| 21:39:34 | Matthias247 | probably depends mostly on how much money they have and where it's needed the most | 
| 21:40:39 | Araq | I am not sure at all. When one *really* can't use a GC, will Rust's regions work instead? | 
| 21:41:01 | Araq | but many *think* they really can't use a GC and then Rust works fine :-) | 
| 21:41:58 | Jehan_ | What surprises me is all the people who think GC is too expensive and then have shared_ptrs littered all over their code. | 
| 21:42:25 | Demos_ | yeah that really irks me as well | 
| 21:43:00 | Matthias247 | the non-deterministic pauses are the bad thing about GC. Not the general performance | 
| 21:43:28 | Matthias247 | When I pushed my library in Android really hard the GC blocked for over 500ms each second | 
| 21:43:32 | Jehan_ | Matthias247: If that were the case, I could understand that. But quite a few people think it's performance in general. | 
| 21:43:33 | Demos_ | well the languages people associate with GC encourage a whole lot of heap allocations | 
| 21:43:52 | Araq | Matthias247: but it's a solved problem. | 
| 21:44:16 | Araq | hard realtime GCs (yes, even for multicore) exist | 
| 21:44:22 | Jehan_ | Demos_: If you mean that Java has a lot to answer for, agreed. | 
| 21:44:47 | Matthias247 | probably yes. On the desktop the JVM also did not have that problem. But on Android it was really hefty | 
| 21:44:48 | Jehan_ | Araq: Yeah, but I don't want to write one. :) | 
| 21:44:52 | Demos_ | java, python, ruby, c#, go, the lot of them | 
| 21:45:11 | Demos_ | actually I think the best idea is to have GC and just call it something else | 
| 21:45:14 | Matthias247 | but Android also assigns only a 32 MB heap or similar. That makes it hard for the GC | 
| 21:45:37 | Jehan_ | Demos: OCaml also encourages heap allocations and doesn't really have that problem. | 
| 21:46:03 | Demos_ | Jehan_, I don't know enough about OCaml to comment on that | 
| 21:47:08 | Matthias247 | with "idiomatic Java" you really need an endless amount of objects. I'm also quite impressed that it performs well nevertheless | 
| 21:47:09 | Jehan_ | Demos_: Simple single-threaded generational collector. Bump allocator for the nursery, incremental collection of mature objects. Fast, soft real-time. | 
| 21:47:31 | Jehan_ | Matthias247: Escape analysis covers up a lot of Java's sins. | 
| 21:47:40 | Araq | Jehan_: I will write one. :-) | 
| 21:48:10 | Jehan_ | Araq: That'd impress me. :) | 
| 21:48:22 | Jehan_ | And I've written GCs (including concurrent GCs) before. | 
| 21:50:42 | Araq | yeah but you likely did it wrong. :P | 
| 21:50:44 | Jehan_ | Multicore incremental (soft real-time) isn't too bad. Multicore with NUMA support and hard real-time requirements …. ugh. | 
| 21:51:34 | Araq | and with that I mean you didn't pick the latest algorithms | 
| 21:51:57 | Jehan_ | Araq: Probably. | 
| 21:53:25 | Araq | Jehan_: escape analysis can't optimize ArrayList<Tuple<int, float>> | 
| 21:53:54 | Jehan_ | No, it can't. Or large matrices over complex numbers. | 
| 21:54:12 | Jehan_ | I said it covered up a lot of sins, not that it made them go away. | 
| 21:54:17 | Araq | *nobody* can optimize that afaik. | 
| 21:54:48 | Araq | not even LuaJIT :-) | 
| 21:55:10 | Jehan_ | You could, up to a certain size, but the overhead would probably be prohibitive for a JIT environment. | 
| 21:56:10 | Jehan_ | LuaJIT is impressive, but I have some lua programs where it basically gives up. :) | 
| 21:56:10 | Araq | hmm I dunno, they already don't give a fuck about JIT startup overhead | 
| 21:56:26 | Jehan_ | Tracing compilers have a hard time with highly polymorphic code. | 
| 21:56:41 | Jehan_ | Araq: They do, or it would be worse. | 
| 21:57:59 | Jehan_ | But it's one reason why I used Mono for the longest time for a compiled "batteries included" language. | 
| 21:58:26 | Jehan_ | Sustained performance wasn't up to par with the JVM, but it was good enough, and mostly no startup overhead. | 
| 21:58:46 | Jehan_ | Unless you used dynamic typing or certain other stuff. | 
| 21:59:39 | Araq | we should get Mike Pall to port his JIT over to Nimrod | 
| 21:59:59 | Jehan_ | You've got a few grand lying around? :) | 
| 22:00:23 | Jehan_ | He's a great guy, but he has to eat, too. :) | 
| 22:00:44 | Araq | "hey Mike, how about supporting a *real* language with your technology?" | 
| 22:01:06 | Araq | who can refuse such an offer? | 
| 22:02:40 | Araq | in fact, when you look at LuaJIT's FFI ... he essentially supports a JIT for C code | 
| 22:03:21 | Jehan_ | The problem is that the Lua/C interface is by necessity less efficient. | 
| 22:03:31 | Jehan_ | Since he can't optimize across language boundaries. | 
| 22:05:30 | Araq | well he surely compiles struct accesses and inlines function addresses | 
| 22:05:51 | Araq | calling C code from Lua can be faster than calling it from C | 
| 22:06:19 | Araq | because he can eliminate PIC etc. | 
| 22:06:39 | Araq | it's incredibly good. | 
| 22:07:05 | Araq | but it's still Lua+C so meh | 
| 22:13:47 | Jehan_ | The problem is that when he calls C functions, he cannot inline that code, hoist invariant parts out of a loop, etc. | 
| 22:14:06 | * | Matthias247 quit (Read error: Connection reset by peer) | 
| 22:14:08 | Jehan_ | Plus, as far as I know, it messes with tracing. | 
| 22:14:23 | Jehan_ | Lua is pretty good at what it does. | 
| 22:14:34 | Jehan_ | I.e. a small, compact, embedded language. | 
| 22:15:13 | Araq | yes, that can't be done, but you don't get that with vanilla C at all, so it is a gain | 
| 22:16:19 | Araq | yeah well ok | 
| 22:16:23 | Jehan_ | Anyhow, time for bed. Good night! | 
| 22:16:35 | Araq | but I don't want a "small" language | 
| 22:16:44 | * | Jehan_ quit (Quit: Leaving) | 
| 22:35:26 | * | Nimrod joined #nimrod | 
| 22:39:20 | * | saml_ joined #nimrod | 
| 22:49:07 | * | bstrie joined #nimrod | 
| 22:52:04 | * | io2 quit () | 
| 22:52:35 | * | boydgreenfield quit (Quit: boydgreenfield) | 
| 22:55:12 | * | boydgreenfield joined #nimrod | 
| 22:55:42 | * | OrionPK quit (Read error: Connection reset by peer) | 
| 22:55:59 | * | OrionPK joined #nimrod | 
| 23:04:21 | Varriount | Araq: Ping | 
| 23:04:52 | dom96 | 666 stargazers. Hell yeah! | 
| 23:05:25 | Varriount | Eh.. What? | 
| 23:06:29 | * | boydgreenfield quit (Quit: boydgreenfield) | 
| 23:08:20 | Araq | Varriount pong | 
| 23:09:00 | Varriount | Araq: When compiling for the javascript backend, do string literals still have null terminators prepended to them? | 
| 23:09:05 | Varriount | *appended | 
| 23:09:39 | Araq | yes | 
| 23:12:18 | * | ics quit (Ping timeout: 255 seconds) | 
| 23:15:07 | * | ics joined #nimrod | 
| 23:18:14 | Varriount | Araq: Why? | 
| 23:19:10 | Araq | nimrod's strings are zero terminated and some code makes use of this fact | 
| 23:19:44 | Araq | it doesn't create much overhead for JS because strings need to be mapped to arrays of numbers already | 
| 23:20:01 | Araq | as JS strings are immutable | 
| 23:20:29 | Araq | cstring is mapped to JS's string however | 
| 23:20:43 | Araq | so it's not all that bad | 
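A hedged illustration of the representation Araq describes; the actual codegen details belong to the compiler, this only shows the shape. A Nim `string` becomes a mutable array of char codes with the trailing zero Varriount asked about, while a Nim `cstring` maps to a native (immutable) JS string:

```javascript
// Nim `string` -> array of char codes, zero-terminated, mutable
var nimStr = [97, 98, 99, 0];   // "abc"

// Nim `cstring` -> plain JS string, immutable
var nimCstr = "abc";

// hypothetical helper converting the array form to a JS string,
// dropping the trailing zero terminator
function toJsStr(a) {
  return String.fromCharCode.apply(null, a.slice(0, a.length - 1));
}

console.log(toJsStr(nimStr) === nimCstr);   // true
```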
| 23:24:37 | Varriount | Araq: Why is it that you refer to the work required for a Lua backend as "string munging"? | 
| 23:29:27 | Araq | because generating LuaJIT's bytecode directly looks fragile | 
| 23:29:32 | * | hoverbear quit () | 
| 23:30:05 | Araq | and so I have to generate Lua code | 
| 23:30:48 | Araq | and that Lua code is a string | 
| 23:31:18 | Araq | you could generate a Lua AST and then transform that AST to a string late in the pipeline | 
| 23:31:38 | Araq | but that's usually even more work | 
| 23:36:20 | * | darkf joined #nimrod | 
| 23:49:22 | * | Jesin joined #nimrod | 
| 23:59:17 | Varriount | So, I fixed the off-by-one bug that the js backend's high() had, however I don't know why my change fixed it. | 
| 23:59:23 | Varriount | Araq: ^ |