<< 02-07-2014 >>

00:00:29*ics quit (Quit: My MacBook Pro has gone to sleep. ZZZzzz…)
00:00:30*io2 quit ()
00:02:57*gsingh93_ joined #nimrod
00:04:52*flaviu joined #nimrod
00:07:33*Trustable quit (Quit: Leaving)
00:33:03*Jesin quit (Ping timeout: 240 seconds)
00:47:33*Jesin joined #nimrod
00:51:44*Skrylar joined #nimrod
00:56:03*Skrylar quit (Ping timeout: 240 seconds)
01:11:31*asterite quit (Quit: Leaving.)
01:21:28*Demos joined #nimrod
01:24:09*goobles quit (Quit: Page closed)
01:27:23*saml_ joined #nimrod
01:39:20*brson_ joined #nimrod
01:40:30*brson quit (Read error: Connection reset by peer)
01:41:03*asterite joined #nimrod
02:02:30reactormonkwhich one was the last mac powerpc? https://github.com/Araq/Nimrod/issues/1193#issuecomment-47559377
02:02:37*brson_ quit (Quit: leaving)
02:03:12DemosI /think/ the little powerbook G4s
02:03:19*brson joined #nimrod
02:03:41Demosbut when apple only supports 10.9 I think it is insane to support PPC mac
02:03:50Demosit is hardly "every mac user"
02:04:03*superfunc joined #nimrod
02:12:41*Nimrod_ joined #nimrod
02:12:59*ics joined #nimrod
02:15:19*Nimrod quit (Ping timeout: 248 seconds)
02:27:56*nande quit (Read error: Connection reset by peer)
02:39:55*saml_ quit (Quit: Leaving)
02:45:27*brson quit (Quit: leaving)
02:50:17*Jesin quit (Ping timeout: 252 seconds)
02:52:04*Jesin joined #nimrod
03:23:56*johnsoft quit (Ping timeout: 260 seconds)
03:28:21*superfunc quit (Ping timeout: 272 seconds)
flaviureactormonk, Demos: I replied to that fwiw, with essentially the same arguments you gave
03:35:30Demosyeah
03:37:12*xtagon joined #nimrod
04:03:36*johnsoft joined #nimrod
04:08:48*Skrylar joined #nimrod
04:16:44*Demos_ joined #nimrod
04:20:07*Demos quit (Ping timeout: 248 seconds)
04:26:25*ARCADIVS quit (Quit: WeeChat 0.4.3)
04:33:11*asterite quit (Quit: Leaving.)
04:54:16*kshlm joined #nimrod
04:55:57*ics quit (Quit: My MacBook Pro has gone to sleep. ZZZzzz…)
05:06:22*flaviu quit (Ping timeout: 264 seconds)
05:33:32*xtagon quit (Read error: Connection reset by peer)
05:35:04*ics joined #nimrod
05:41:46*Nimrod_ quit (Ping timeout: 264 seconds)
05:47:55*hoverbear quit ()
05:56:46*Jesin quit (Ping timeout: 264 seconds)
05:58:03*boydgreenfield joined #nimrod
06:38:44*Jesin joined #nimrod
07:14:28*io2 joined #nimrod
07:26:42*zahary quit (Quit: Leaving.)
07:29:25*boydgreenfield quit (Quit: boydgreenfield)
07:48:41*ics quit (Quit: My MacBook Pro has gone to sleep. ZZZzzz…)
07:50:40*io2 quit ()
07:55:07*Trustable joined #nimrod
08:01:35*BitPuffin quit (Ping timeout: 252 seconds)
08:05:14*goobles joined #nimrod
08:05:20*kunev joined #nimrod
08:14:20*kemet joined #nimrod
08:14:27*kemet quit (Remote host closed the connection)
08:14:50*kemet joined #nimrod
08:19:11*kemet quit (Remote host closed the connection)
08:35:24*Demos_ quit (Read error: Connection reset by peer)
08:39:26*BitPuffin joined #nimrod
10:43:16*q66 joined #nimrod
11:05:14*io2 joined #nimrod
11:22:57*saml_ joined #nimrod
11:37:18*kunev quit (Read error: Connection reset by peer)
11:38:17*kunev joined #nimrod
11:41:10*Skrylar quit (Ping timeout: 264 seconds)
11:41:21*Skrylar joined #nimrod
12:13:31*saml_ quit (Quit: Leaving)
12:19:05*kunev quit (Quit: leaving)
12:20:18*boDil_ joined #nimrod
12:21:04*kunev joined #nimrod
12:26:06*untitaker quit (Ping timeout: 255 seconds)
12:27:36*io2 quit (Ping timeout: 260 seconds)
12:30:55*untitaker joined #nimrod
12:56:36*BitPuffin quit (Quit: See you on the dark side of the moon!)
13:04:30boDil_http://www.drdobbs.com/architecture-and-design/the-best-of-the-first-half/240168580
13:04:33boDil_Good job, Araq
13:13:33*kshlm quit (Ping timeout: 240 seconds)
13:26:42*io2 joined #nimrod
13:27:29*boDil_ quit (Quit: Page closed)
13:29:32reactormonkAnyone got a macbook here?
13:42:48*OrionPK quit (Read error: Connection reset by peer)
13:43:05*OrionPK joined #nimrod
13:57:11*johnsoft quit (Ping timeout: 248 seconds)
13:58:01*kshlm joined #nimrod
13:58:59*kshlm quit (Client Quit)
14:00:36*johnsoft joined #nimrod
14:00:39*johnsoft quit (Read error: Connection reset by peer)
14:02:40OrionPKyeah i have a macbook
14:07:21*darkf quit (Quit: Leaving)
14:07:33def-is it always necessary to manually close a file?
14:12:48*BitPuffin joined #nimrod
14:14:36*hoverbear joined #nimrod
14:26:59*johnsoft joined #nimrod
14:32:20flyxreactormonk: I have one too
14:59:30*asterite joined #nimrod
15:01:15*nande joined #nimrod
15:02:02*hoverbear quit ()
15:09:02*hoverbear joined #nimrod
15:09:39*hoverbear quit (Client Quit)
15:09:56reactormonkcould you test the corresponding bug?
15:13:09*kshlm joined #nimrod
15:16:11flyxreactormonk: #1193? you'd need an iBook or PowerBook for that, MacBooks already have Intel
flyxreactormonk: anyway, I have a PowerBook lying around, I'll see if it still boots ;)
15:34:15*asterite left #nimrod (#nimrod)
15:34:49reactormonkflyx, nah, it's apparently mac too
15:36:53*Jesin quit (Quit: Leaving)
15:37:02flyxah okay, trying to reproduce it here
15:40:11flyxreactormonk: aye, the build of koch fails with the error quoted in the issue (iMac, OSX 10.9)
15:52:49*boydgreenfield joined #nimrod
15:54:26*gkoller joined #nimrod
15:55:20*kunev quit (Quit: leaving)
Skrylardef-: it's usually a good idea to close files early, even if gcs/destructors do it for you
16:10:05Skrylardue to file locks / write caches and whatnots
16:11:35*gkoller quit (Ping timeout: 248 seconds)
16:24:02*ics joined #nimrod
16:27:11*Puffin joined #nimrod
16:29:03*BitPuffin quit (Ping timeout: 240 seconds)
16:36:15flyxso are there any jobs for Nimrod coders yet? I could do with one ^^
16:39:00*kunev joined #nimrod
16:43:08*Demos joined #nimrod
17:00:42def-Skrylar: i was hoping that files get closed when they go out of scope, or maybe python's "with"
17:00:48*Demos_ joined #nimrod
17:01:06def-but from my experiments that's not happening
17:04:11def-i can add a destructor that calls close, which seems to work. hm
17:04:31Skrylarwell TFile is just a thin wrapper over FILE*
17:04:34*Demos quit (Ping timeout: 264 seconds)
17:04:56Skrylardef-: however its possible to do that, using destructors, or use a template that closes the handle
17:05:30def-Skrylar: I'm also wondering whether that should be the default
17:05:56Skrylardef-: i donno. destructors are wiggly
17:06:07Skrylarwhat if i want to pass a file handle to another struct?
17:06:23Skrylarthe destructor runs for my local one, closes the file, now the remote handle is invalid
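A minimal sketch of the template approach Skrylar mentions, roughly what Python's `with` does: close() is guaranteed by try/finally even if the body raises (modern Nim spelling; older Nimrod used `TFile`/`expr`/`stmt`):

```nim
template withFile(f, filename, mode, body: untyped) =
  # Opens the file, runs `body`, and always closes, even on an exception.
  var f: File
  if open(f, filename, mode):
    try:
      body
    finally:
      close(f)
  else:
    quit("cannot open: " & filename)

withFile(txt, "out.txt", fmWrite):
  txt.writeLine("written and closed automatically")
```

Because the template inlines `body`, no closure or destructor is involved, which sidesteps the shared-handle problem discussed below.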
17:06:32*hoverbear joined #nimrod
17:09:22*_dLog is now known as dLog
AraqSkrylar: which is why I plan to require a particular form of escape analysis for destructors
17:10:42Araqdestructors make no sense without escape analysis IMHO
17:11:16Araqyou end up with shitty "compiler can elide copy assignments" rules otherwise
17:11:42Skrylarshitty compilers you say
17:11:55Skrylarclearly you just need to write a FORTH compiler and base nimrod on top of that
17:11:56*Skrylar ducks
17:12:45Skrylaractually i think i've seen a few of those and they aren't bad; though they have a lot of stack twaddling to them
17:13:13def-oh, and "with" is a keyword in Nimrod, but is it actually used?
17:14:14*boydgreenfield quit (Quit: boydgreenfield)
17:17:45*icebattle joined #nimrod
17:18:34*boydgreenfield joined #nimrod
17:21:01Araqdef-: kind of
17:21:27Araqtype foo = distinct int with ... is already parsed iirc
17:21:58*Matthias247 joined #nimrod
17:22:02def-what does it do, Araq?
17:22:44Araqtype foo = distinct int with `.`, `==`
17:22:59def-ah, that's cool
17:23:18def-no more 20 borrows
17:24:43Araqalso 'with' was planned as a replacement for .push
17:24:56Araqwith overflowChecks=on:
17:25:01Araq echo a+ b
17:25:20Araqdunno if we'll do it though
17:25:43Araq.push has the advantage that it doesn't lead to excessive nesting
17:37:42*brson joined #nimrod
17:41:52Demos_can you say distinct int with *
17:42:01Demos_or maybe with all
17:43:53Araqtype foo = int # yay, now works "with all"
17:44:17Demos_but there is an implicit conversion then
17:44:23Araq"with all" makes no sense
17:44:35Araqdollar*dollar != dollar
17:44:48Demos_yeah I spose
17:44:52AraqTaintedString has similar typing rules
17:45:44Araqthere is no "implicit conversion" either, it's simply a type alias
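The way this is spelled today is a per-proc `{.borrow.}` pragma; the `with` syntax discussed above would collapse the list into one line. A small sketch (the `Dollars` type is made up):

```nim
type Dollars = distinct int

# Borrow only the operations that make sense for the unit:
proc `+`(a, b: Dollars): Dollars {.borrow.}
proc `==`(a, b: Dollars): bool {.borrow.}
proc `$`(d: Dollars): string {.borrow.}

var price = Dollars(20) + Dollars(22)
assert price == Dollars(42)
# `*` is deliberately not borrowed: dollars * dollars != dollars,
# which is Araq's argument against a blanket "with all".
```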
17:57:11*hoverbea_ joined #nimrod
18:00:54*hoverbear quit (Ping timeout: 255 seconds)
18:05:46*Puffin quit (Quit: See you on the dark side of the moon!)
18:06:03*BitPuffin joined #nimrod
dom96hi
18:46:42BlameStrossI'm about to hack together a light-weight graph library. Is there any work done on variable-at-runtime-sized matrices?
18:47:20BlameStrossor do I get to hack that together myself?
18:47:31dom96BlameStross: What about bignum?
18:47:55BlameStrossIt is working minus division. Right now I am working on the stuff I am hypothetically paid for
18:48:44BlameStrossBigNum was for fun. I am writing a network topology simulation and analysis tool for research data for an academic paper.
18:49:11BlameStrosstis a one-off. I've already written it in python but I need it to go faster.
18:49:19AraqBlameStross: I had a graph library based on sparse integer bit sets (comparable to TIntSet), but lost the code
18:49:55Araqbut TIntSet might be good enough already, depending on what you need to do with the graphs
18:50:54BlameStrossAraq: I'm not doing anything that crazy, just add-node, add-edge, find inter-node distances.
18:51:38Araqproc addEdge(a, b: int) = s.incl(a*N + b) # where N is the number of nodes in the graph
18:52:22BlameStrossthe way I am familiar with on how to implement this is as 2 half-matrices of edges, one as transpose.
18:53:38BlameStrossAraq: That lets me store the presence of an edge. I'd need another structure to store edge weights anyway.
18:53:52Araqwell that was my question
18:55:16flyxis there anything I can do to help fix #903?
18:55:24flyxthat's kind of blocking me right now
18:55:29BlameStrossHonestly I am going to go and make something needlessly ugly out of sequences because I think it will code up faster
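Araq's `TIntSet` suggestion, fleshed out into a sketch for an unweighted, undirected graph (the `N` bound and proc names are made up; a separate table would hold edge weights):

```nim
import intsets

const N = 1000          # assumed upper bound on node count

var edges = initIntSet()   # sparse bit set of encoded edge indices

proc addEdge(a, b: int) =
  edges.incl(a * N + b)
  edges.incl(b * N + a)    # store both directions for an undirected graph

proc hasEdge(a, b: int): bool =
  a * N + b in edges
```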
19:01:04*_blank_ joined #nimrod
19:01:17*superfunc joined #nimrod
19:01:23superfuncsup everyone
19:02:09Demos_BlameStross, for runtime sized matrices store the width and height and then a seq of width*height length
19:02:23Araqflyx: I'm working on these VM bugs right now
19:02:31flyx\o/
19:02:48*flyx fetches some Coffee for Araq
19:02:59*_blank_ left #nimrod (#nimrod)
19:03:08BlameStrossDemos_: yeah, that is the plan right now. I might do something a little more complex because I am only doing regular graphs not digraphs so I only need half of a matrix
19:04:53Araqflyx: however, usually global .compileTime variables suggest your macro doesn't capture enough
19:04:59Araqmymacro:
19:05:05Araq # .. whole module here
19:05:25Araqis possible too
19:05:37*_blank_ joined #nimrod
19:06:17*_blank_ quit (Client Quit)
19:06:21BlameStrossDemos_: the fun bit is changing the matrix size without clobbering old edges
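Demos_'s suggestion as a sketch: a runtime-sized matrix stored as a flat seq indexed `row * width + col`, with a resize that copies old entries into the new buffer so existing edges are not clobbered (all names are illustrative):

```nim
type Matrix = object
  width, height: int
  data: seq[float]

proc initMatrix(w, h: int): Matrix =
  Matrix(width: w, height: h, data: newSeq[float](w * h))

proc `[]`(m: Matrix; r, c: int): float = m.data[r * m.width + c]
proc `[]=`(m: var Matrix; r, c: int; v: float) =
  m.data[r * m.width + c] = v

proc resize(m: var Matrix; w, h: int) =
  # Allocate fresh storage and copy the overlapping region.
  var fresh = initMatrix(w, h)
  for r in 0 ..< min(h, m.height):
    for c in 0 ..< min(w, m.width):
      fresh[r, c] = m[r, c]
  m = fresh
```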
19:06:35flyxAraq: that's what I'm doing in emerald. but I need some cache across macro calls for complex stuff like template inheritance
19:06:36*_blank_ joined #nimrod
19:06:53Demos_what is emerald?
19:07:11flyxDemos_: https://github.com/flyx/emerald
19:07:16*_blank_ quit (Client Quit)
19:07:26Demos_neato
19:07:55Demos_if it makes you feel better I use macros to generate honest-to-god runtime global variables :D
19:08:16*Jehan_ joined #nimrod
19:08:33flyx^^
19:09:45*Skrylar takes Demos_'s coding privileges and puts them in tupperware for using globals
19:09:56Demos_I do not access them directly in my code
19:10:10Demos_like you need a handle to access them
19:10:20Demos_the idea came to me in a dreem
19:10:30*_blank_ joined #nimrod
19:10:32Demos_s/dreem/dream/
19:11:10flyxone problem I discovered is that the user can accidentally hit the name of any variable you introduce in a macro
19:11:29Araqgensym?
19:11:44AraqbindSym?
19:12:18*_blank_ quit (Client Quit)
19:12:29flyxAraq: ah, didn't know about that, thanks
19:12:44*flyx goes updating his code
19:12:58Skrylarsweeeeet
19:13:09Skrylaryou can tell vim to repeat things on lines
19:13:24*Skrylar just read about using ranges to tell it to run macros
fowlBlameStross, did you see this https://github.com/blamestross/nimrod-vectors/issues/1
19:18:08Demos_static[int] does "work" in types, but it screws up overload resolution
Demos_vectors of different sizes do not like to be considered separate for OR
19:18:42fowlwell its not usable there yet
19:18:43flyxI don't seem to be able to use bindSym to bind to an identifier I created with genSym
19:19:25BlameStrossfowl: yep. I made some changes based on it (mostly adding proc `[]=`)
19:21:08Araqflyx: nor should you
19:21:22Araqyou use gensym and then that's it
19:21:28BlameStrossfowl: While it would succeed in allowing type checking work for Vector dimensions. It is neither intuitive nor simple. I think it would be better just to wait for static[int] to work.
19:23:22BlameStrossAt least for the work I am doing with this library, I am never mixing dimension count in the same program. Even if I do mess that up, it will be caught fast at runtime and have a useful error.
19:24:47flyxAraq: hm. so I just copy the returned PNimrodNode everywhere?
19:25:10Araqflyx: yes
19:25:27Araqyou don't even have to copy it (I hope)
19:26:03flyxah, okay. yes, that should be usable. and it's guaranteed to work even if I use it in immediate macros where the identifiers are not resolved yet?
19:26:30Araqshould work, yes
19:26:41flyxgreat
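The hygiene fix Araq is describing, sketched: `genSym` mints an identifier user code cannot collide with, and the same returned node can be reused at every reference site without a later `bindSym` (macro and names are made up for illustration):

```nim
import macros

macro twice(body: untyped): untyped =
  ## Runs `body` two times, using a loop variable the caller can't clash with.
  let i = genSym(nskForVar, "i")   # fresh, collision-proof symbol
  result = quote do:
    for `i` in 0 ..< 2:
      `body`

var hits = 0
twice:
  hits += 1
assert hits == 2
```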
19:26:42fowlBlameStross, whats not intuitive
BlameStrossso, to make a vector in R*k space, I need to define a custom ordinal type which is valid for the range 0..k-1 then use that type as the generic argument for every vector
19:28:47fowlBlameStross, where in my example do you see generic arguments
19:29:52*goobles quit (Ping timeout: 246 seconds)
19:30:32fowlnowhere, because you see vec[0], not vec[0..1][0]
19:32:01BlameStrossno but you have:
19:32:01BlameStrossVector[0..20] = VecA+VecB
19:32:31*nande quit (Read error: Connection reset by peer)
BlameStrosswell, var vec : Vector[0..20] = VecA+VecB
19:33:15fowlBlameStross, a and b are vectors of 21/
19:33:16fowl?
19:33:28*boydgreenfield quit (Quit: boydgreenfield)
19:33:39BlameStrossfowl: sure
19:34:01fowlvar vec = veca+vecb
19:34:40BlameStrossI thought you had to declare type in the var statement
19:35:00fowlBlameStross, well i do it in my example "var v = initVector(2)"
19:35:20Araqomg, where did you get that idea, BlameStross ?
19:35:28BlameStrossthe tutorial?
19:35:35BlameStrossthat I skimmed
19:35:43BlameStross3-4 days ago
19:36:22BlameStrossand there it is in examples 2/3 of the way down
19:36:22AraqSince the compiler knows that
19:36:24BlameStrosshuh
19:36:24Araq``readLine`` returns a string, you can leave out the type in the declaration
19:36:25Araq(this is called `local type inference`:idx:). So this will work too:
19:36:27Araq.. code-block:: Nimrod
19:36:28Araq var name = readLine(stdin)
19:36:41BlameStrosswell, that simplifies a lot of code
19:37:15Araqwell most examples lack the type annotation in tutorial 1
19:37:35Araqyour skimming capabilities really suck ;-)
BlameStrossfowl: your proposal is now reasonable. I thought I'd have to haul around the complex type-def for every variable declaration.
19:38:02BlameStrossAraq: they do. I just start writing and reading compile errors
19:38:16BlameStrosswhich I will note are actually pretty good
19:39:15*nande joined #nimrod
19:39:23Araqok ... I guess everybody learns differently
19:39:26fowlBlameStross, nah you will only need the vector's range in type defs/func params, though you probably want to define Vector2, Vector3, etc right away
19:41:18*Demos_ quit (Read error: Connection reset by peer)
19:41:39BlameStrosslol. Essentially I really on learn by doing. So I just start doing and survive the inital horrible resistance.
19:41:48BlameStrosss/on/only
19:42:21BlameStrossthe path to understanding is paved with stupid mistakes.
19:45:15BlameStrossI'll have some free-time to re-write tomorrow.
19:46:33*boydgreenfield joined #nimrod
19:47:57*Demos joined #nimrod
19:48:24*Demos_ joined #nimrod
19:48:26*superfunc quit (Quit: leaving)
19:50:33*nande quit (Remote host closed the connection)
19:53:11*Demos quit (Ping timeout: 272 seconds)
19:56:52BlameStrossfowl: proc initVector* (len:static[int]): auto =
19:57:09BlameStrosshow does type "auto" work here?
19:59:03*Jehan_ quit (Remote host closed the connection)
19:59:07reactormonkBlameStross, short version of "the compiler can figure that out for me"
19:59:12*nande joined #nimrod
20:00:05BlameStrossSo If I instantiate 2d 3d and 22d vectors, it re-implements the proc for each type?
20:00:11BlameStrossat compile time
20:04:28*Varriount|Mobile joined #nimrod
20:04:57*Jehan_ joined #nimrod
20:09:42reactormonkBlameStross, probably
20:10:25boydgreenfieldthreadpool question: How can I limit the number of threads that get spawned if I want to iterate through a loop of, e.g., input files? `setMaxPoolSize(2)` doesn’t appear to work / do anything for me
20:10:41AraqBlameStross: yes but with a bit of luck the linker merges them ... *cough*
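A sketch of what `static[int]` plus `auto` does here: the length is a compile-time value, so it can size an `array`, and each distinct length instantiates its own copy of the proc (modern Nim spelling, names illustrative):

```nim
proc initVector(len: static[int]): auto =
  # `len` is known at compile time, so array[len, float] is legal;
  # initVector(2) and initVector(3) become two separate instances.
  var v: array[len, float]
  v                        # last expression is the return value

var a = initVector(2)      # array[2, float]
var b = initVector(3)      # a second, independent instance
```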
20:11:37*dLog is now known as spike021
20:12:07*spike021 is now known as dLog
20:13:41Araqboydgreenfield: how many cores do you have?
20:14:00boydgreenfieldAraq: 4 (8 with hyperthreading)
20:15:15boydgreenfieldAraq: Interesting — it uses up to 8, but then stops unless I said setMaxPoolSize(>8)
20:15:29Araqwell yes
20:15:48Araqit creates 8 threads in setup() already
20:15:53Araqand then uses these
20:16:00boydgreenfieldAraq: Wait nevermind, it just uses up to 8 regardless. Perhaps I’ve missed it – how should I be changing the max and min threads?
20:16:10Araqyou shouldn't
Araqit adapts to your current CPU load
20:16:47boydgreenfieldAraq: So there’s no way to use less than all my cores in a pattern like `for input_file in input_files: spawn process_file(input_file)`
20:17:02Araqyeah. for now.
20:17:19Araqwe can make it respect maxPoolSize properly
20:17:38Araqit didn't occur to me that you like to throttle your CPU :P
20:17:40boydgreenfieldAraq: Got it. I guess I just kind of assumed setMaxPoolSize was being used and didn’t look further.
20:17:52Araqwell it is used
20:18:09Araqbut only to determine if it's allowed to create *more* threads ;-)
20:18:23boydgreenfieldAraq: More than the number of cores?
20:18:30Araqyep
20:19:36boydgreenfieldAraq: What’s the syntax for that? I’m hitting a hard limit at 8 (cores w/ hyperthreading), using setMaxPoolSize(16) and then for i in 0.. <16: spawn X does two sets of 8, for example.
20:19:54*nande quit (Read error: Connection reset by peer)
20:20:06boydgreenfieldAraq: Note that this probably isn’t a problem for my use case… just trying to understand what controls I *do* have available
20:20:37Araqwell *max* pool size still means the thread pool can do what it wants within this limit
20:21:11boydgreenfieldAraq: Actually, no it is a problem now that I think about it since I’m memory bound and may only want to use, e.g., two threads w/ 10GB or RAM each (on a machine w/ 16 cores)
20:21:28boydgreenfieldCan the user set the max?
20:23:13boydgreenfield(Ah now looking through, it appears not)
20:23:32Araqwell please make a PR
20:23:45AraqI'm not sure what you really need
20:24:03Araqlooks like people need to call setup() on their own then
20:24:21boydgreenfieldWill do – ya I’m forking now for that functionality.
20:24:27Araqor we go the Go way and have some shitty environment variable
20:24:36Araqnah ...
20:25:59boydgreenfieldThe current syntax is elegant — but some additional control might be warranted for certain use cases (then again, maybe one should just be using a finer-grained control in that case vs. spawn)
20:26:04boydgreenfieldThanks for the help.
20:26:21*nande joined #nimrod
20:26:41Araqwell channels + createThread still work and can be faster
20:27:00Araqespecially since you already know you want exactly 2 threads
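Araq's alternative, sketched: with channels plus `createThread` the thread count is exactly what you create, so "two workers with 10 GB each" is straightforward. Compile with `--threads:on`; the file names are made up:

```nim
var jobs: Channel[string]
var workers: array[2, Thread[int]]   # exactly two worker threads

proc worker(id: int) {.thread.} =
  while true:
    let (ok, path) = jobs.tryRecv()
    if not ok: break                 # queue drained, stop
    echo "worker ", id, ": processing ", path

jobs.open()
for f in ["a.fastq", "b.fastq", "c.fastq"]:
  jobs.send(f)                       # enqueue all work first

for i in 0 .. workers.high:
  createThread(workers[i], worker, i)
joinThreads(workers)
jobs.close()
```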
20:30:03*nande quit (Read error: Connection reset by peer)
20:30:15boydgreenfieldAraq: I’ll take a look there. I’m kind of fumbling around as I didn’t have a good sense of what concurrency options to start with and where the design was headed.
20:33:17boydgreenfieldIn the meantime, just writing a setup(min_threads, max_threads) proc and calling it manually works well.
20:33:24*hoverbea_ quit (Ping timeout: 260 seconds)
20:34:20*MayurYa joined #nimrod
20:34:20*MayurYa quit (Changing host)
20:34:20*MayurYa joined #nimrod
20:34:23AraqJehan_: I still don't know how your withHeap really works
20:34:46Jehan_Araq: In what regard?
20:34:56AraqIF withHeap does the locking, that's so coarse that it can't parallelize anything
20:35:30Araqbut if it doesn't do that we need additional checks in newObj
20:35:55Jehan_First, as I said, it's a pretty barebones model.
20:36:29*nande joined #nimrod
20:36:42Jehan_Second, without compiler support (I think, effects *might* work), you can't do readonly access.
20:37:09Jehan_Third, without compiler support, it's difficult to optimize some important cases.
20:37:24Jehan_That said, yes, you can parallelize quite a bit.
20:37:53Jehan_As with any form of locking, it's crucial to not hold the lock for long.
20:38:32Araqwell compiler support is a non-issue
20:38:41Jehan_You also want multiple shared heaps for such a setup, so that there's very little blocking.
20:39:05Jehan_Araq: I wasn't proposing this as a long-term solution, but as a relatively quick and easy hack.
20:39:36Jehan_If you want something long-term, it needs to be surrounded by quite a bit more language support, and then it stops being quick and easy.
20:39:43Araqwell spawn+FlowVar already enjoys compiler support
20:40:41Araqinterestingly it is easier than lambda lifting. but then everything is.
20:42:24Jehan_The most important optimizations would be (1) to facilitate read-only access, i.e. guaranteeing that a section of code doesn't modify the heap and (2) optimized deep-copying for transfering data between heaps for common use cases (int, float, string parameters in particular, as well as small tuples of these).
20:42:41Jehan_You'd also want language support to make the basic model more expressive.
20:42:54Jehan_See, e.g., what Eiffel does with SCOOP.
20:43:08AraqJehan_: can you outline some usages with your withHeap?
20:43:26AraqI still don't see how it'll be used in practice and produce a speedup
20:44:40Jehan_Sure. For example, a shared hash table. Take N heaps, map buckets randomly to heaps.
20:45:17Jehan_Well, pseudo-randomly, you want the mapping to be reproducible or you'll never find stuff again. :)
20:46:58*hoverbear joined #nimrod
20:47:43*nande quit (Remote host closed the connection)
20:49:26Araqwell thinking about it ... that's unsafe like hell
20:49:37Jehan_Why would it be unsafe?
20:50:06Jehan_Or rather, unsafe can mean a lot of things. What kind of unsafe do you mean?
20:50:25Araqyou transform allocShared into 'new' with this withHeap primitive
20:50:40Jehan_No.
20:50:53Jehan_I don't. I'm using the old new.
20:50:57Araqyes you do
20:51:01Araqnot literally
20:51:07Jehan_I'm temporarily switching out the thread-local heap.
20:51:14Araqexactly
20:51:20Jehan_So that all new requests go to the shared heap.
20:51:34Jehan_the withHeap() interface prevents any points to the shared heap from escaping.
20:51:34Araqand the types remain 'ref', 'string', 'seq'
20:51:46Jehan_Because all data that goes back and forth HAS to be serialized/deserialized.
20:51:52Araqnew(node)
20:51:55Jehan_pointers*
20:52:04AraqwithHeap(...):
20:52:13Araq new(foreign)
20:52:17Jehan_Yes?
20:52:23Araq foreign.next = node #oops
20:52:27Jehan_Can't.
20:52:36Jehan_withHeap calls a procvar.
20:52:48Araqah, ok
20:52:56Jehan_I was very specific there.
20:52:59Jehan_:)
20:53:07Jehan_Also, again why I called it pretty barebones.
20:53:15Jehan_You CAN do it as described above.
20:53:34Jehan_If you are careful with not capturing the environment. needs compiler support, though.
20:53:41Araqin fact
20:54:01*nande joined #nimrod
20:54:02Jehan_You can do with a closure that does capture-copying, though.
20:54:12Araqwe already have a write barrier for 'foreign.next = node'
20:54:44Araqwe could an assert that the pointers must come from the same heap
20:54:50Araq*could add
20:54:59Jehan_But that's what I meant by needing optimization above in that the proposed quick-and-easy stuff simply serializes all data passing back and forth.
20:55:17Jehan_THAT can be inefficient as hell if you do it a lot for simple types.
20:55:49Jehan_But with compiler support, you can optimize a lot of common cases (ints, floats, strings, and composite types of simple types).
20:56:13Araqyeah an optimized 'deepCopy' is in the works already
20:56:18Jehan_Another concern is the per-heap overhead.
20:57:53Jehan_I've been working on compiler optimization for reference counting (as an alternative to deferred RC) myself, because that can co-exist with stuff like tcmalloc or other implementations with a single physical heap as long as the "logical" heaps don't get mixed up.
20:58:48Araqdoing better than deferred RC is very very hard
20:59:06Araqdoing it statically, I mean
20:59:12Jehan_I know.
20:59:39Jehan_The next step would be to be able to mix and match.
21:00:29*nande quit (Remote host closed the connection)
21:01:07*MayurYa quit (Quit: *nix server crashed w/ bof)
21:01:20Araqbtw the planned way to do the "shared" hash table is:
21:01:24Araqparallel:
21:01:39Araq for i in 0.. <N:
21:02:17Araq output[i] = spawn processPartialData(input[i])
21:02:35Araqmerge(output[0.. <N])
21:03:11*boydgreenfield quit (Quit: boydgreenfield)
21:03:19Jehan_Inefficient or not good enough for a lot of important problems.
21:03:55Araqpff you can say that to every solution :P
21:04:21Jehan_The problem is that very frequently (many parallel graph search algorithms in particular) you need all threads to be able to access ALL the partial hash tables.
21:04:35Jehan_Or you're going to do a lot of duplicate work.
21:04:52Jehan_What you propose is fine for functional type stuff.
21:05:10Jehan_But not for where the shared hash table is basically one big data accumulator.
21:05:40def-I solved a few Rosetta Code tasks in Nimrod today, if anyone wants to take a look or has suggestions: http://rosettacode.org/wiki/Special:Contributions/Def
21:06:48*nande joined #nimrod
21:08:33*kunev quit (Ping timeout: 240 seconds)
21:12:13Jehan_"parallel:" is great for stuff that is basically about vectorization. But it's not so great for divide-and-conquer, traversal of irregular spaces, or other things that can't naturally be subdivided in subproblems of roughly equal size.
21:15:35AraqI think it's good enough for divide-and-conquer too
21:16:25Araqgraph traversal is hard but we keep forgetting about cast[foreign ptr T](myref)
21:16:40Jehan_How would you implement factorial(1000000)?
21:17:03*boydgreenfield joined #nimrod
21:17:19Araqluckily I don't have to care about factorial(10...) :-)
21:17:43Jehan_Heh. It's a simple example of divide and conquer.
21:17:43Araqdivide-and-conquer is 'sort' for me ;-)
Jehan_factorial(n) = product(1, n) with product(m, n) = product(m, (n+m) div 2) * product((n+m) div 2 + 1, n) (stop recursing once n-m is sufficiently small).
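That recurrence, sketched with standalone `spawn` (needs `--threads:on`; a real factorial(1000000) needs bignums, since `int` overflows past 20!, and the cutoff of 8 is arbitrary):

```nim
import threadpool

proc product(m, n: int): int =
  if n - m < 8:                      # small range: multiply sequentially
    result = 1
    for i in m .. n: result *= i
  else:
    let mid = (m + n) div 2
    let left = spawn product(m, mid) # left half runs on the pool
    let right = product(mid + 1, n)  # right half on this thread
    result = ^left * right           # reading the FlowVar blocks until ready

echo product(1, 12)                  # 12! is small enough for an int
```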
21:19:29Araqthat's not the problem here
21:19:37Araqthe real problem is memory management
21:19:52*hoverbea_ joined #nimrod
21:19:59Araqfor 'fac' (bignums) you get the problems you're thinking of
21:20:08Jehan_Not a problem, you can copy arguments and results between thread-local heaps.
21:20:23Araqfor 'sort' you don't need memory management (swap) and so it's fine
21:20:46*hoverbear quit (Ping timeout: 264 seconds)
21:21:02Jehan_Runtime for this is dominated by sequential bigint multiplication, which is O(n^x), with x depending on your algorithm.
Jehan_And which dwarfs the O(n) cost of copying.
21:21:42dom96Nice to see Nimrod mentioned on HN by people who I have never seen here.
21:21:50Araqwell then where's the problem?
21:22:09Jehan_Araq: The problem is that it's not easily expressible as a par-for loop.
21:22:10dom96(or at least as far as I can remember)
21:22:38AraqJehan_: so use a standalone 'spawn' instead. we had this discussion already
21:22:50*hoverbear joined #nimrod
21:23:22Jehan_Araq: I don't mean that you can't do it, just that a par for implementation isn't ideal.
21:23:52Jehan_I know can do it right now. In fact, I did it a while ago with createThread()/joinThread() even. :)
21:24:07*hoverbea_ quit (Ping timeout: 248 seconds)
21:25:28Araqa parfor implementation IS ideal when your problem somewhat fits it. it's *deterministic* parallelism.
21:25:58Jehan_Araq: Yup.
21:26:02Jehan_Not disagreeing.
21:27:30Araqwell now I still don't know if you know we not only got 'parallel' but also a standalone 'spawn'. which then returns a FlowVar
21:27:49Jehan_Umm … box(GC) expr?
21:28:01Jehan_Please tell me Rust has a way to macro this or stuff.
AraqIt has, I think
21:29:00Jehan_I know it has macros, I'm just wondering if they can handle it.
21:29:07Araqbut it'll be discouraged because it makes code "hard to follow" and apparently it's 1970 and everybody uses VIM without language understanding plugins
21:29:53Jehan_The thing is: 99 out of 100 times I want fire-and-forget memory management.
21:30:45Jehan_But Rust assumes that it's the other way round.
21:31:00Araqthey got macros that can modify the grammar
21:31:08Araqso it should be possible
21:31:52Jehan_Araq: Will it also interact properly with libraries? Will libraries allow for the fact that I don't give one damn about managing ownership?
21:34:11AraqI don't know; but it will solve all of Haskell's problems. ;-)
21:35:33dom96Jehan_: Indeed. That is why I likely will never use Rust. IMO it will never gain Go's traction.
21:36:06dom96Because 99% of programmers want "fire-and-forget" memory management 100/100 times.
21:36:23Matthias247I think the "new" memory managemeent makes it even more complicated
21:36:32Matthias247or at least even more noisy
21:36:33Demos_when you don't want fire and forget you may as well just use a GC
21:36:37Jehan_Well, to be clear, I do think that this exactly what some application domains need. But I'm not writing a monolithic web browser, a kernel model, or AAA video games.
21:37:00Jehan_s/model/module/
21:37:12fowlnimrods gc is good enough for all those things Jehan_ :p
21:38:02fowlwhat causes firefox to eat up so much memory? maybe it needs some vitamin GC
21:38:09dom96I also think that the risk of Mozilla giving up on Rust is very real.
21:38:09Jehan_fowl: May be tricky in some cases. Plus, the cycle collector in its current incarnation may need an upgrade.
21:38:17dom96They still call it a "research project" IIRC
21:38:47Matthias247the risk is there indpedent of how the language actually performs :)
21:39:03Jehan_dom96: No idea. For what it's worth, I am happy with Rust prospering in its chosen niche.
21:39:34Matthias247probably depends mostly on how much money the have and where it's needed the most
21:40:39AraqI am not sure at all. When one *really* can't use a GC, will Rust's regions work instead?
21:41:01Araqbut many *think* they really can't use a GC and then Rust works fine :-)
21:41:58Jehan_What surprises me is all the people who think GC is too expensive and then have shared_ptrs littered all over their code.
21:42:25Demos_yeah that really irks me as well
21:43:00Matthias247the non-deterministic pauses are the bad thing about GC. Not the general performance
21:43:28Matthias247When I pushed my library in Android really hard the GC blocked for over 500ms each second
21:43:32Jehan_Matthias247: If that were the case, I could understand that. But quite a few people think it's performance in general.
21:43:33Demos_well the languages people associate with GC encourage a whole lot of heap allocations
21:43:52AraqMatthias247: but it's a solved problem.
21:44:16Araqhard realtime GCs (yes, even for multicore) exist
21:44:22Jehan_Demos_: If you mean that Java has a lot to answer for, agreed.
21:44:47Matthias247probably yes. On the desktop JVM it also did not have that problem. But on Android it was really hefty
21:44:48Jehan_Araq: Yeah, but I don't want to write one. :)
21:44:52Demos_java, python, ruby, c#, go, the lot of them
21:45:11Demos_actually I think the best idea is to have GC and just call it something else
21:45:14Matthias247but Android also assigns only a 32 MB heap or similar. That makes it hard for the GC
21:45:37Jehan_Demos: OCaml also encourages heap allocations and doesn't really have that problem.
21:46:03Demos_Jehan_, I don't know enough about OCaml to comment on that
21:47:08Matthias247with "idiomatic Java" you really need an endless amount of objects. I'm also quite impressed that it performs well nevertheless
21:47:09Jehan_Demos_: Simple single-threaded generational collector. Bump allocator for the nursery, incremental collection of mature objects. Fast, soft real-time.
21:47:31Jehan_Matthias247: Escape analysis covers up a lot of Java's sins.
21:47:40AraqJehan_: I will write one. :-)
21:48:10Jehan_Araq: That'd impress me. :)
21:48:22Jehan_And I've written GCs (including concurrent GCs) before.
21:50:42Araqyeah but you likely did it wrong. :P
21:50:44Jehan_Multicore incremental (soft real-time) isn't too bad. Multicore with NUMA support and hard real-time requirements …. ugh.
21:51:34Araqand with that I mean you didn't pick the latest algorithms
21:51:57Jehan_Araq: Probably.
21:53:25AraqJehan_: escape analysis can't optimize ArrayList<Tuple<int, float>>
21:53:54Jehan_No, it can't. Or large matrices over complex numbers.
21:54:12Jehan_I said it covered up a lot of sins, not that it made them go away.
21:54:17Araq*nobody* can optimize that afaik.
21:54:48Araqnot even LuaJIT :-)
21:55:10Jehan_You could, up to a certain size, but the overhead would probably be prohibitive for a JIT environment.
21:56:10Jehan_LuaJIT is impressive, but I have some lua programs where it basically gives up. :)
21:56:10Araqhmm I dunno, they already don't give a fuck about JIT startup overhead
21:56:26Jehan_Tracing compilers have a hard time with highly polymorphic code.
21:56:41Jehan_Araq: They do, or it would be worse.
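Jehan_'s point about polymorphic code can be illustrated with a small sketch (hypothetical, not from the log): a single call site that observes several distinct object shapes, the pattern that makes a tracing JIT's specialized traces keep bailing out and re-recording.

```javascript
// A "megamorphic" call site: getX is reached with objects of several
// different shapes. A tracing JIT specializes a trace for the shape it
// first records; each new shape fails the guard and forces a side exit.
function getX(o) { return o.x; }

const shapes = [
  { x: 1 },
  { x: 2, y: 0 },
  { y: 0, x: 3 },          // different property order => different shape
  { x: 4, y: 0, z: 0 },
];

let sum = 0;
for (const o of shapes) sum += getX(o); // one call site, four shapes
console.log(sum); // prints 10
```

With a single shape the loop stays on the fast path; mixing shapes at one site is exactly what Jehan_ means by "highly polymorphic".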
21:57:59Jehan_But it's one reason why I used Mono for the longest time for a compiled "batteries included" language.
21:58:26Jehan_Sustained performance wasn't up to par with the JVM, but it was good enough, and mostly no startup overhead.
21:58:46Jehan_Unless you used dynamic typing or certain other stuff.
21:59:39Araqwe should get Mike Pall to port his JIT over to Nimrod
21:59:59Jehan_You've got a few grand lying around? :)
22:00:23Jehan_He's a great guy, but he has to eat, too. :)
22:00:44Araq"hey Mike, how about supporting a *real* language with your technology?"
22:01:06Araqwho can refuse such an offer?
22:02:40Araqin fact, when you look at LuaJIT's FFI ... he essentially supports a JIT for C code
22:03:21Jehan_The problem is that the Lua/C interface is by necessity less efficient.
22:03:31Jehan_Since he can't optimize across language boundaries.
22:05:30Araqwell he surely compiles struct accesses and inlines function addresses
22:05:51Araqcalling C code from Lua can be faster than calling it from C
22:06:19Araqbecause he can eliminate PIC etc.
22:06:39Araqit's incredibly good.
22:07:05Araqbut it's still Lua+C so meh
22:13:47Jehan_The problem is that when he calls C functions, he cannot inline that code, hoist invariant parts out of a loop, etc.
22:14:06*Matthias247 quit (Read error: Connection reset by peer)
22:14:08Jehan_Plus, as far as I know, it messes with tracing.
22:14:23Jehan_Lua is pretty good at what it does.
22:14:34Jehan_I.e. a small, compact, embedded language.
22:15:13Araqyes, that can't be done, but you don't get that with vanilla C at all, so it is a gain
22:16:19Araqyeah well ok
22:16:23Jehan_Anyhow, time for bed. Good night!
22:16:35Araqbut I don't want a "small" language
22:16:44*Jehan_ quit (Quit: Leaving)
22:35:26*Nimrod joined #nimrod
22:39:20*saml_ joined #nimrod
22:49:07*bstrie joined #nimrod
22:52:04*io2 quit ()
22:52:35*boydgreenfield quit (Quit: boydgreenfield)
22:55:12*boydgreenfield joined #nimrod
22:55:42*OrionPK quit (Read error: Connection reset by peer)
22:55:59*OrionPK joined #nimrod
23:04:21VarriountAraq: Ping
23:04:52dom96666 stargazers. Hell yeah!
23:05:25VarriountEh.. What?
23:06:29*boydgreenfield quit (Quit: boydgreenfield)
23:08:20AraqVarriount pong
23:09:00VarriountAraq: When compiling for the javascript backend, do string literals still have null terminators prepended to them?
23:09:05Varriount*appended
23:09:39Araqyes
23:12:18*ics quit (Ping timeout: 255 seconds)
23:15:07*ics joined #nimrod
23:18:14VarriountAraq: Why?
23:19:10Araqnimrod's strings are zero terminated and some code makes use of this fact
23:19:44Araqit doesn't create much overhead for JS because strings need to be mapped to arrays of numbers already
23:20:01Araqas JS strings are immutable
23:20:29Araqcstring is mapped to JS's string however
23:20:43Araqso it's not all that bad
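A minimal sketch of the mapping Araq describes (function names hypothetical, not the actual compiler output): a Nimrod string becomes a mutable array of character codes carrying the trailing 0, while cstring round-trips through a native, immutable JS string.

```javascript
// Hypothetical illustration: a Nimrod string as a mutable array of
// char codes, zero-terminated so code relying on the terminator works.
function toNimStr(s) {
  const arr = [];
  for (let i = 0; i < s.length; i++) arr.push(s.charCodeAt(i));
  arr.push(0); // the zero terminator
  return arr;
}

// cstring maps to a native JS string; the terminator is dropped
// on the way out.
function toJsStr(arr) {
  let out = "";
  for (let i = 0; i < arr.length && arr[i] !== 0; i++) {
    out += String.fromCharCode(arr[i]);
  }
  return out;
}

const s = toNimStr("abc"); // [97, 98, 99, 0]
s[0] = 65;                 // mutation is possible, unlike on a JS string
console.log(toJsStr(s));   // prints "Abc"
```

The array representation is needed anyway because JS strings are immutable, which is why the extra terminator costs almost nothing.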
23:24:37VarriountAraq: Why is it that you refer to the work required for a Lua backend as "string munging"?
23:29:27Araqbecause generating LuaJIT's bytecode directly looks fragile
23:29:32*hoverbear quit ()
23:30:05Araqand so I have to generate Lua code
23:30:48Araqand that Lua code is a string
23:31:18Araqyou could generate a Lua AST and then transform that AST to a string late in the pipeline
23:31:38Araqbut that's usually even more work
23:36:20*darkf joined #nimrod
23:49:22*Jesin joined #nimrod
23:59:17VarriountSo, I fixed the off-by-one bug that the js backend's high() had, however I don't know why my change fixed it.
23:59:23VarriountAraq: ^
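A hedged guess at the shape of such a bug (hypothetical; the actual backend code may differ): with the zero-terminated array representation discussed above, a high() derived from the raw array length over-counts by the terminator, so the fix subtracts one extra element.

```javascript
// Hypothetical sketch: "hi" represented with its zero terminator.
const repr = [104, 105, 0];

// Off by one: length - 1 is the index of the terminator itself.
function buggyHigh(a) { return a.length - 1; }

// Correct: length - 2 is the index of the last real character.
function fixedHigh(a) { return a.length - 2; }

console.log(buggyHigh(repr)); // prints 2 (index of the 0 terminator)
console.log(fixedHigh(repr)); // prints 1 (index of 'i')
```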