<< 29-06-2014 >>

00:00:20*darkf_ joined #nimrod
00:00:59*armin1 joined #nimrod
00:01:32*reloc0 quit (Disconnected by services)
00:01:33*clone1018 joined #nimrod
00:01:35*armin1 is now known as reloc0
00:02:11*asterite1 quit (Quit: Leaving.)
00:02:39*Jessin joined #nimrod
00:03:24*clone1018_ quit (Read error: Connection reset by peer)
00:03:24*xenagi quit (Ping timeout: 240 seconds)
00:04:06*mal`` quit (Ping timeout: 240 seconds)
00:04:07*Jesin quit (Ping timeout: 240 seconds)
00:04:18*darkf quit (Ping timeout: 240 seconds)
00:04:18*Amrykid quit (Ping timeout: 240 seconds)
00:04:19*Demos quit (Ping timeout: 240 seconds)
00:04:56*Amrykid joined #nimrod
00:05:09flaviuWow, crypto hashes have pretty poor performance closer to 0 bytes
00:05:50flaviuAt 8 bytes, they require ~100-500 cycles per byte
00:06:13flaviuI'll still benchmark later, of course
00:06:15flaviuhttp://bench.cr.yp.to/results-sha3.html
00:06:28*Demos joined #nimrod
00:07:49Araqdom96: so is c2nim an "app"?
00:08:01Araqor a "binary"? or both?
00:08:04*Demos_ joined #nimrod
00:09:56*Roin quit (Ping timeout: 240 seconds)
00:09:56*Demos quit (Ping timeout: 240 seconds)
00:09:57*mal`` joined #nimrod
00:13:54dom96Araq: both
00:14:37Araqso both tags then?
00:14:39*XAMPP-8 joined #nimrod
00:17:27*CARAM_ quit (Ping timeout: 260 seconds)
00:17:31dom96Araq: Sure. I don't enforce the tags to be consistent.
00:18:06*vendethiel- joined #nimrod
00:18:44Araqis my PR automatically updated?
00:18:51flaviuAraq: Yes
00:18:55*vendethiel quit (Ping timeout: 260 seconds)
00:19:01flaviuAny commits to that branch get added to the PR
00:19:10Araqnice
00:20:47*clone1018 quit (Ping timeout: 260 seconds)
00:20:56*TylerE quit (Ping timeout: 260 seconds)
00:21:16*clone1018 joined #nimrod
00:21:43*CARAM_ joined #nimrod
00:22:42*TylerE joined #nimrod
00:28:09*darkf_ is now known as darkf
00:29:27*boydgreenfield joined #nimrod
00:30:11Araqhmm I just got an idea
00:30:50Araqthe parser should special case @`+`
00:31:08Araqthis is an infix operator @ that takes `+`
00:31:12Araqso we can write
00:31:18Araqa @`+` b
00:31:46*armin1 joined #nimrod
00:32:18Araqand the @ is a proc that lifts `+` to sequences, like 'map'
00:32:41*reloc0 quit (Disconnected by services)
00:32:46*armin1 is now known as reloc0
00:32:51Araqso instead of map(`+`, a, b) we get a @`+` b
00:32:58*krusipo_ joined #nimrod
00:33:01Araqopinions?
00:33:18*milosn_ joined #nimrod
00:34:23Demos_kinda neat
00:34:50flaviuLike http://stackoverflow.com/a/8001065/2299084 "Placeholder syntax"?
00:35:14dom96I don't get this
00:35:54*Roin joined #nimrod
00:36:02*reactormonk_ joined #nimrod
00:36:17dom96I don't think 'map(`+`, a, b)' is a correct usage of map
00:36:41*saml_ joined #nimrod
00:36:58*Raynes_ joined #nimrod
00:37:17Araqdom96: it's a binary map then or whatever, I'm sure you get the idea
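A rough sketch of what the proposed `a @`+` b` could desugar to. Since the parser special-casing doesn't exist, this hypothetical `lift` template takes the operator explicitly and applies it element-wise over two seqs, i.e. the "binary map" Araq describes (modern Nim syntax):

```nim
# Hypothetical helper: `a @`+` b` would become something like lift(`+`, a, b).
template lift(op: untyped; a, b: seq[int]): seq[int] =
  var res = newSeq[int](min(a.len, b.len))
  for i in 0 ..< res.len:
    res[i] = op(a[i], b[i])   # op is the lifted binary operator, e.g. `+`
  res

let xs = @[1, 2, 3]
let ys = @[10, 20, 30]
echo lift(`+`, xs, ys)   # element-wise sum of the two sequences
```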
00:37:42*Jessin quit (*.net *.split)
00:37:44*phI||Ip quit (*.net *.split)
00:37:44*reactormonk quit (*.net *.split)
00:37:44*Skrylar quit (*.net *.split)
00:37:44*milosn quit (*.net *.split)
00:37:45*Raynes quit (*.net *.split)
00:37:45*comex quit (*.net *.split)
00:37:45*tumak_ quit (*.net *.split)
00:37:45*krusipo quit (*.net *.split)
00:37:52*Raynes_ is now known as Raynes
00:37:53*Raynes quit (Changing host)
00:37:53*Raynes joined #nimrod
00:37:54dom96Araq: Can you show me an example in terms of sequences?
00:38:16flaviuAraq: Is "Placeholder syntax" at the link somewhat like you're saying?
00:38:25*Skrylar joined #nimrod
00:38:36boydgreenfieldAny: What version of the compiler should I be using if I want to make use of threadpool and spawn? I’m on v0.9.5 (6/27) on my local machine, and have something working perfectly, but on a remote server v0.9.5 (6.29) I get: lib/pure/concurrency/threadpool.nim(64, 25) Error: undeclared identifier: 'fence'
00:38:43*phI||Ip joined #nimrod
00:38:43boydgreenfield(while compiling)
00:39:26*comex joined #nimrod
00:40:14boydgreenfieldOk… nevermind. I missed the threads option. My bad.
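For reference, a minimal threadpool/spawn program of the kind being compiled above; the `fence` error boydgreenfield hit disappears once threads are enabled, i.e. compiling with `--threads:on` (modern Nim spelling shown; `fib` is just a placeholder workload):

```nim
# Compile with: nim c --threads:on spawn_demo.nim
import threadpool

proc fib(n: int): int =
  if n < 2: return n
  fib(n - 1) + fib(n - 2)

let fut = spawn fib(20)   # schedule on a worker thread from the pool
echo ^fut                 # ^ blocks until the FlowVar has a value
sync()                    # wait for any remaining spawned tasks
```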
00:40:48*Jessin joined #nimrod
00:40:50*tumak joined #nimrod
00:41:25*q66 quit (Quit: Leaving)
00:44:41*Mathnerd626 joined #nimrod
00:45:03*Mathnerd626 quit (Read error: Connection reset by peer)
00:51:49*lorxu quit (Ping timeout: 240 seconds)
00:52:01*lorxu joined #nimrod
00:53:30*superfunc joined #nimrod
00:56:04*ARCADIVS joined #nimrod
00:58:43*lorxu quit (Ping timeout: 240 seconds)
01:03:15Araqboydgreenfield: ok, so now I know people are already using threadpool
01:03:23Araqinteresting
01:04:21boydgreenfieldAraq: Well, I just did for the first time. Should I be holding off?
01:04:57Araqthe API is stable now afaict
01:05:11Araqit'll get more features though
01:05:34mmatalkais it going to be a copy of TPL?
01:06:53Araqno
01:22:35flaviuAraq: If you're still here, I'm curious why the compiler needs to go through the hassle of bootstrapping. Isn't it possible to just compile the c sources and avoid bootstrapping?
01:23:24Araqthe c sources are platform specific
01:23:36Araqbut yes, that's what ./build.sh does
01:23:50superfuncDoes anybody know of a clean way to design a general menu system for games? I have some ideas, but none are very clean.
01:24:17flaviusuperfunc: Not suggesting anything, but I think some games use html
01:24:50superfuncAh, I mean more for control flow, I already have a pretty slick idea for rendering
01:25:19flaviuAraq: Although I'm sure you've considered it, couldn't the `when` statements be translated to C for it to deal with?
01:25:38Araqit's not only the 'when'
01:25:41flaviusuperfunc: Oh, control flow is easy. Just have an array of function pointers and call each each frame
01:26:14Araqbut the 'when' is perhaps the most problematic feature for it
01:26:30Araqbut what's the gain?
01:26:44flaviuYou don't ever have to worry about compiler dependencies
01:26:59Araqyeah right
01:27:13superfuncA friend of mine just mentioned a FSM for it, I'll consider the tradeoffs between the two ideas. Thanks flaviu
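The two ideas floated here (a table of per-frame handler procs, and a finite state machine) combine naturally: each menu screen is a state whose update proc returns the next state. A minimal sketch, with placeholder states and transitions:

```nim
import tables

type MenuState = enum
  msMain, msOptions, msQuit

# Placeholder handlers: in a real game each would poll input, draw the
# screen, and decide the next state.
proc updateMain(): MenuState = msOptions
proc updateOptions(): MenuState = msQuit

let handlers = {msMain: updateMain, msOptions: updateOptions}.toTable

var state = msMain
while state != msQuit:
  state = handlers[state]()   # dispatch the current state's handler each frame
```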
01:27:28Araqbecause C doesn't have to worry about that ... wait a sec? what do you mean autoconf?!
01:28:06flaviuOh, you're right. I didn't consider that different systems even on the same architecture have different capabilities
01:28:58Araqbtw I don't worry about compiler deps
01:29:13AraqI can easily make these work too
01:29:22AraqI prefer not to though
01:33:46flaviuAlso, has nimrod been tested on big endian machines?
01:34:27flaviuI'm trying to set up one in an emulator, and I don't know if things will work
01:35:21flaviuQEMU is so slow that even bare-bones linux takes forever to boot
01:35:36Araqit works on big endian machines
01:35:58Araqwe regularly run it on powerpc which is big endian, I think
01:36:39flaviuIIRC it's bi-endian
01:37:23superfunc^
01:38:16flaviuSeems the bigger issue is finding a compiler
01:38:35flaviuGCC supports mips64, but it seems arch doesn't have a copy for that
01:50:05Araqgood night
01:50:26superfuncnight araq
01:53:48*boydgreenfield quit (Quit: boydgreenfield)
01:54:59*superfunc quit (Ping timeout: 264 seconds)
01:59:18*brson joined #nimrod
02:17:11*Nimrod quit (Ping timeout: 264 seconds)
02:18:14*Nimrod joined #nimrod
02:21:53*Jessin quit (Quit: Leaving)
02:26:43*brson quit (Ping timeout: 240 seconds)
02:28:02*brson joined #nimrod
02:30:27*Jesin joined #nimrod
02:33:13*xtagon joined #nimrod
02:53:47*shevy quit (Ping timeout: 264 seconds)
03:01:19flaviuqemu is slow
03:01:26flaviuI guess I'll have to leave it overnight
03:11:17*lorxu joined #nimrod
03:26:37*boydgreenfield joined #nimrod
03:27:17boydgreenfieldAnother babel question, sorry. Any way to force it to take from the head of a git repo? (As opposed to the last tag?)
03:28:05*brson quit (Quit: leaving)
03:32:24flaviuboydgreenfield: It's actually a new feature
03:32:37boydgreenfieldflaviu: How so?
03:32:45flaviuLet me see if I can find the blog post
03:33:10boydgreenfieldflaviu: / I added a tag, and am now getting this error (https://github.com/nimrod-code/babel/blob/509eff97a3f590a8b06af774d772e21d3bc3df06/src/babelpkg/download.nim#L155), despite the version actually being in range
03:33:38flaviuhttp://picheta.me/articles/2014/06/babel--nimrods-package-manager.html
03:33:45*Mathnerd626 joined #nimrod
03:33:49flaviu`babel install commandeer@#26b6c035b6c`
03:34:42flaviuI'm not actually very familiar with babel, I just remember dom mentioning that
03:34:52boydgreenfieldflaviu: Can one specify that in a .babel file though?
03:34:55*Mathnerd626 quit (Read error: Connection reset by peer)
03:35:11boydgreenfieldI’ll open an issue – appears to be a bug in the getPackageInfo() call (may be excluding the last commit)
03:35:35flaviuI think so. set the version to "#6593562364" or whatever
03:36:10flaviuThe sample on that page says `Requires: "nimrod >= 0.9.4, commandeer > 0.1, https://github.com/runvnc/bcryptnim.git#head"` is valid
03:37:51*Mathnerd626 joined #nimrod
03:38:56boydgreenfieldRequires “ssh…#head” works fine, but there’s some error with looking for a tag that’s also the head. I’ll try to track down exactly where.
03:40:16*saml_ quit (Quit: Leaving)
03:51:03*Mathnerd626 quit (Read error: Connection reset by peer)
03:53:13*boydgreenfield quit (Quit: boydgreenfield)
04:00:11*Mathnerd626 joined #nimrod
04:21:37*Mathnerd626 quit (Remote host closed the connection)
04:22:49*XAMPP-8 quit (Ping timeout: 240 seconds)
04:25:47*XAMPP-8 joined #nimrod
04:45:49*XAMPP-8 quit (Ping timeout: 240 seconds)
04:48:02*XAMPP-8 joined #nimrod
05:12:20*Demos_ quit (Read error: Connection reset by peer)
05:13:12*xenagi|2 quit (Quit: Leaving)
05:27:07*XAMPP-8 quit (Ping timeout: 240 seconds)
05:28:56*xtagon quit (Ping timeout: 248 seconds)
05:44:16*terrydog101 joined #nimrod
05:52:13*terrydog101 quit (Quit: Bye)
06:17:47*flaviu quit (Ping timeout: 264 seconds)
06:42:20*io2 joined #nimrod
07:39:25*CARAM_ quit (Changing host)
07:39:25*CARAM_ joined #nimrod
07:39:28*TylerE quit (Changing host)
07:39:28*TylerE joined #nimrod
07:41:13*BitPuffin quit (Ping timeout: 248 seconds)
08:07:54*BitPuffin joined #nimrod
08:12:24*BitPuffin quit (Ping timeout: 260 seconds)
08:41:15*kunev joined #nimrod
08:59:43*io2 quit ()
09:08:18*BitPuffin joined #nimrod
09:12:37*johnsoft quit (Read error: Connection reset by peer)
09:13:12*johnsoft joined #nimrod
09:13:30*BitPuffin quit (Ping timeout: 255 seconds)
09:30:30*Matthias247 joined #nimrod
09:42:23*noam quit (Ping timeout: 264 seconds)
09:51:15*Fr4n joined #nimrod
10:09:03*BitPuffin joined #nimrod
10:09:08*Amrykid quit (*.net *.split)
10:10:27*Amrykid joined #nimrod
10:13:50*BitPuffin quit (Ping timeout: 240 seconds)
10:19:51*io2 joined #nimrod
10:32:37*q66 joined #nimrod
10:32:44*ARCADIVS quit (Ping timeout: 240 seconds)
10:57:11*silven joined #nimrod
11:00:47*ARCADIVS joined #nimrod
11:09:50*BitPuffin joined #nimrod
11:14:20*BitPuffin quit (Ping timeout: 240 seconds)
11:26:28*BitPuffin joined #nimrod
12:07:42*BitPuffin quit (Ping timeout: 245 seconds)
12:09:00*io2 quit (Ping timeout: 260 seconds)
12:20:22*BitPuffin joined #nimrod
12:23:32*untitaker quit (Ping timeout: 245 seconds)
12:29:47*Araq is now known as araq
12:30:16*untitaker joined #nimrod
12:31:50araqping Varriount
12:32:50*darkf quit (Quit: Leaving)
12:35:43*BitPuffin quit (Ping timeout: 244 seconds)
12:36:05*ARCADIVS quit (Quit: WeeChat 0.4.3)
12:39:18dom96araq: you're lowercase again
12:44:07*araq is now known as Araq
12:46:53*kunev quit (Ping timeout: 252 seconds)
12:48:19*io2 joined #nimrod
12:49:40def-const arrays have to start at 0?
12:54:48Araqconst foo: array[3..5, int] = [3,4,5] # should work
12:55:42def-Araq: does not i think
12:56:46def-with let it works
13:05:23Araqwell the compiler itself uses it so I'm puzzled
13:05:29Araqbug report please
13:05:42def-Araq: exactly, i was also wondering when i saw that in the compiler
13:05:53def-I'll open an issue
13:24:30*Mathnerd626 joined #nimrod
13:43:15*flaviu joined #nimrod
14:02:04*Jehan_ joined #nimrod
14:05:56*BitPuffin joined #nimrod
14:10:18*Mathnerd626 quit (Ping timeout: 240 seconds)
14:30:44*asterite joined #nimrod
15:08:43Araqhi asterite how's crystal?
15:09:33Araqdef-: ah that's a known bug with the const eval engine
15:09:47Araqif carr[4] is not evaluated at compile-time it works ...
15:09:53asteriteHi Araq. Good, we are playing with macros :)
15:10:56Araqhow about abandoning this "we want a compiled Ruby" bullshit and helping us instead? ;-)
15:12:42asteriteHehehe :-P
15:13:26asteritehow about abandoning this "i put types everywhere" bullshit and helping us instead? ;)
15:13:29def-Araq: aah, ok
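A minimal reproduction sketch of the case under discussion: a non-zero-based array works with `let`, while the `const` version trips the known compile-time eval bug Araq mentions when an element is read at compile time (names here are illustrative):

```nim
# Non-zero-based array bounds: indices run 3..5 rather than 0..2.
const carr: array[3..5, int] = [3, 4, 5]
let larr: array[3..5, int] = [3, 4, 5]

echo carr[4]   # fine at runtime; per the discussion, a compile-time
echo larr[4]   # read of carr[4] is what triggered the eval-engine bug
```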
15:15:11Jehan_Heh. :)
15:15:16*Trustable joined #nimrod
15:16:07*io2 quit ()
15:17:35Araqasterite: ah yeah ... these crazy types, much better to conflate every feature mankind ever invented and call it a 'class' :P
15:19:47asteriteSomeone told me there was nothing else after 'class', and I believed him
15:20:00asteriteBut then I found 'struct', 'module' and some other stuff
15:22:50*kunev joined #nimrod
15:24:59Jehan_I'm probably some weird freak in that I like both dynamically (esp. Ruby) and statically typed languages.
15:25:12Araqyup. indeed.
15:25:35*Jehan_ sulks in a corner.
15:25:41Araqbut maybe I programmed in the wrong dynamic languages
15:26:14AraqLua, Python, Lisp, Smalltalk ... hrmm I don't think so
15:26:15Jehan_Well, I did write an Eiffel compiler in Ruby (okay, with a parser/lexer in PCCTS).
15:26:25asteriteI like both, I just don't like the speed of dynamic languages and the lack of feedback when compiling (because… no compiling :-P)
15:27:24Jehan_It depends on the application domain and what constraints you're dealing with.
15:28:08Jehan_For example, code in interpreted languages is generally easier to deploy.
15:28:26Matthias247I'm probably the only one who finds programming in dynamic languages more time-consuming than in static languages ;)
15:28:55AraqMatthias247: no, it's objectively slower IMHO
15:29:41AraqJehan_: I find compiled code easier to deploy, usually
15:29:45Jehan_Matthias247: Really depends on what you're doing and what constraints you're dealing with.
15:30:49Jehan_Araq: Depends on what you can assume about the target environment.
15:31:54Jehan_Bootstrapping the necessary environment for most compiled languages can be complicated, time-consuming, or both.
15:32:44Jehan_Nimrod is the big exception here and it's one of Nimrod's major attractions for me.
15:32:45Araqin what kind of environment can I assume Python 3 is the default but do know nothing about the CPU architecture?
15:33:04Jehan_Then don't write Python 3 code. :)
15:33:56Araqhow is Ruby or others better in this respect?
15:36:19Jehan_Building Python or Ruby from scratch should take a couple of minutes, tops.
15:38:01*io2 joined #nimrod
15:39:04Araqwe have different opinions about what it means to "deploy" then ;-)
15:39:37Jehan_Araq: I'm mostly concerned with platforms that can in the worst case be pretty barebones.
15:39:54Jehan_Or have really, really outdated stuff.
15:40:08Jehan_It's sadly not unusual for HPC environments.
15:43:19asteriteIn a compiled language deploying can't be just dropping an executable?
15:43:25Jehan_I also sometimes need to write code that runs on whatever a particular user has at home.
15:43:27asterite(in -> with)
15:44:21Jehan_asterite: Try cross-compiling on your laptop for a Cray XE?
15:45:24asteriteI don't deploy on that computer that often :)
15:45:50Jehan_Yeah. As I said, I'm dealing with various HPC environments on a not so infrequent basis. :)
15:46:11Jehan_And several of them can really turn your notion of portability upside down.
15:46:41*asterite quit (Quit: Leaving.)
15:48:48Araqhmm wrk includes LuaJIT
15:49:16Araqfatal error: openssl/ssl.h: No such file or directory
15:49:17Araq #include <openssl/ssl.h>
15:50:44*Demos joined #nimrod
16:01:48Jehan_LuaJIT can be a bit finicky, especially because it doesn't use something like autoconf.
16:02:55Jehan_For what it's worth, I've been using a stripped down Lua in the past to do autoconf-like stuff without having to deal with /bin/sh as the lowest common denominator.
16:03:37DemosOne of the things that attracted me to nimrod was that you could just do autoconf stuff using CTFE and slurp/gorge
16:04:12Jehan_Demos: At the cost of driving up compile time, though.
16:04:23Jehan_Better to write a configure script in Nimrod.
16:06:59DemosI suspect that autotools/make/cmake/whatever is already really slow
16:07:32Demosbesides you get a whole lot of the benefit of autotools with just modules and nimrod's runtime shared library loading scheme
16:08:59Jehan_Demos: It's slow, but that's primarily because of running the C compiler for every single feature.
16:09:06Jehan_And there's no way around that, really.
16:09:45*joelmo joined #nimrod
16:10:18Demosright, I also like the fact that building nimrod on windows does not involve black magic and the souls of long dead FSF members
16:10:59Demosquestion: how do object variants interact with cyclic data structures?
16:11:20Jehan_What do you mean by "interact with"?
16:11:32Jehan_And the point about Windows is well-taken. :)
16:11:36Demoslet me post a gist
16:12:57Demoshttps://gist.github.com/barcharcraz/8ce838a131b7e3e7c74c
16:14:40Jehan_That won't work unless you make it ref object.
16:14:54Demosthe contents of the array I assume
16:14:59Demosor well it would be the same
16:15:12Jehan_It's a data structure that has infinite size.
16:15:27Jehan_Or potentially infinite size at least.
16:15:41Demosyeah, that is what I thought.
16:15:59Jehan_You can't really lay that out in memory without pointers, either.
16:16:09Demosand reflecting on how tagged unions actually work it should have been obvious
16:16:44Jehan_Why not make it a ref object? That's what I tend to do by default.
16:17:27Demosyeah my current code has array[1..3, ref TTest]
16:17:54DemosI like to allow stack allocation of each node I guess
16:18:01Demosnot that it matters that much
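The fix Jehan_ suggests, sketched in modern Nim syntax: making the variant a `ref object` means each slot stores a pointer, so the recursive layout terminates at `nil` instead of having potentially infinite size (type and field names here are illustrative, not those of the gist):

```nim
type
  NodeKind = enum
    nkLeaf, nkBranch
  Node = ref object            # ref breaks the infinite-size layout problem
    case kind: NodeKind
    of nkLeaf:
      val: int
    of nkBranch:
      kids: array[1..3, Node]  # pointers to children; empty slots stay nil

let leaf = Node(kind: nkLeaf, val: 42)
let tree = Node(kind: nkBranch)
tree.kids[1] = leaf
```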
16:20:22*asterite joined #nimrod
16:26:44*asterite quit (Quit: Leaving.)
16:28:41*asterite joined #nimrod
16:28:41*asterite quit (Client Quit)
16:40:48*kunev quit (Ping timeout: 255 seconds)
16:54:06*blamestross quit (Quit: blamestross)
17:14:20*Mathnerd626 joined #nimrod
17:14:20*lorxu quit (Read error: Connection reset by peer)
17:14:27*lorxu joined #nimrod
17:28:35*lorxu quit (Ping timeout: 264 seconds)
17:29:37*Jesin quit (Quit: Leaving)
17:36:08Demoscan you iterate through an enum in nimrod? or do you have to cast?
17:37:31fowlyes iterate
17:37:45Jehan_What fowl said.
17:42:59EXetoCit currently iterates over holes also
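Iterating an enum without a cast looks like this (a minimal sketch; for an enum with explicit, non-contiguous ordinal values, EXetoC's caveat applies and the iteration walks the holes too):

```nim
type Direction = enum
  north, east, south, west

# Iterate the full ordinal range of the enum; no cast needed.
for d in low(Direction) .. high(Direction):
  echo d
```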
17:55:23Jehan_Ouch. This must hurt for Mexico.
17:57:47*asterite joined #nimrod
18:06:20*BitPuffin quit (Ping timeout: 240 seconds)
18:06:46*asterite quit (Quit: Leaving.)
18:15:13VarriountAraq: Hm? Release what?
18:15:15*asterite joined #nimrod
18:37:43*ARCADIVS joined #nimrod
18:45:11*Mathnerd626 quit (Read error: Connection reset by peer)
18:48:37VarriountHello asterite, ARCADIVS
18:53:12asteriteHi Varriount
19:05:26*asterite quit (Quit: Leaving.)
19:06:41*asterite joined #nimrod
19:08:48AraqVarriount: well I wanted to release today
19:08:57Araqbut that would be extremely rushed
19:09:23VarriountRelease... like, an entirely new release?
19:09:47Araqyeah
19:10:32VarriountHasn't it only been like... 3 months since the last release?
19:11:04VarriountOr are we now aiming for more frequent releases?
19:11:20*asterite quit (Ping timeout: 260 seconds)
19:11:24VarriountOr is there some big new feature that needs to be released?
19:13:19Araqwell I thought the async stuff is now stable
19:13:33Araqbut apparently it is not ...
19:14:11VarriountIt has a long way to go. Personally, I'd like it to be actually able to utilize more than one core for IO first.
19:14:54Araqwhat about that corruption? you said you detected some wrong GC_unref call?
19:15:29VarriountThat was related to my integration of file monitoring with asyncio
19:15:33Skrylarwell you see
19:15:39Skrylarhe added literate nimrod support lol
19:15:50VarriountSkrylar: Huh?
19:16:29SkrylarVarriount: literate programming is when you hate yourself enough to write your program as though it was a paper document with code segments
19:17:10Jehan_Skrylar: Say what you want, but literate programming can be pretty darn nice for maintaining code.
19:17:22SkrylarJehan_: i *have* toyed around with noweb before :)
19:17:33SkrylarAnd org-mode's version of it
19:18:45Skrylari think its probably better when you're documenting an algorithm like "how does deflate work" though... i already have a lot of normal comments in a file, and literate tools tend to botch up your other ones
19:18:45AraqVarriount: yes, well it sounded like a general bug in asyncio
19:19:02Skrylarother ones = other tools
19:22:54AraqVarriount: using multithreading with async IO *on windows* is almost impossible for nimrod
19:23:48Jehan_Skrylar: In my experience, where literate programming tends to beat comments is at documenting the "big picture".
19:24:03Araqyou have to use multiprocessing instead.
19:24:40SkrylarJehan_: i usually resort to asciidoc at that point
19:25:19Skrylarthere was a tool i wanted for nimrod, but i didn't have a hasher back then; it basically read section flags and gave you hash codes, so it could check if your documentation was outdated on a given topic
19:25:26VarriountAraq: Not if some flexibility is sacrificed...
19:25:27Jehan_Skrylar: Yes, my point is not that it cannot be done, just that it tends to happen more naturally.
19:26:00Jehan_If you write your code as part of a textual description, it tends to turn out differently than when you write textual descriptions as part of the code.
19:26:13VarriountAraq: Why do you think it's nearly impossible?
19:27:02SkrylarJehan_: i have a bad problem where the source information tends to get botched :/
19:27:04Araqbecause Windows essentially says "ok, this callback can be run on any thread"
19:27:25Araqthis breaks every invariant in the runtime
19:27:45VarriountAraq: Windows does not tie a callback to its event notifications.
19:27:53SkrylarJehan_: it might work better the way one of the R tools does it, where the 'literate' parts are written as comments and the tangle/weave tools operate that way; the classic literate way where code is written as asides tends to ruin error logs
19:28:00Skrylar"will the real line 30 please stand up?"
19:28:26Jehan_Skrylar: Heh. Yes, that's one of the more annoying problems whenever you have code generation.
19:29:01Jehan_I am thinking of having the Talis compiler support literate programming natively, which is why that's on my mind.
19:29:05SkrylarJehan_: knitr has support for taking lines which are comments out of an R file, and generating the 'literate' markup from that, so your line information is good to the compiler but your markdown gets shoved to a separate file for processing
19:29:25SkrylarI think Haskell supports direct literate using bird marks
19:29:28*BitPuffin joined #nimrod
19:30:03AraqVarriount: we can talk later about it, I'll be back later
19:30:16VarriountAraq: Sure, I'll be here for the rest of the day.
19:30:45Skrylarfor some reason i've always liked asciidoc more than reST... maybe its just because reST is usually so glued to pythoners
19:31:39Skrylaradoc is basically DocBook XML converted in to a markup format
19:31:53VarriountAraq: I think that, at the very least, some redesigning of the notions that asyncdispatch runs on are going to be needed if multi-threaded async io is going to be supported.
19:32:56Jehan_The problem with the Haskell style is that it screws up autoindent.
19:33:15Jehan_Skrylar: I also prefer asciidoc, I just hate the asciidoc tooling.
19:34:03Jehan_Plus, using XML as an intermediate format. Bleh. XML can diaf as far as I'm concerned.
19:34:41Varriount*die
19:35:18Jehan_Varriount: If you are talking to me, diaf was what I meant. :)
19:36:32SkrylarJehan_: use asciidoctor.rb
19:38:10Skrylari suspect it would not be hard to write a tool which was able to just yank comments out of a file and make every line NOT a comment into a source code block
19:38:17Jehan_Skrylar: I'm not sure how asciidoctor would fix my issues?
19:38:27Jehan_It's still targeting docbook XML as far as I know.
19:38:43Skrylarboth the python and ruby variants of asciidoc can produce html directly y'know
19:39:02Jehan_I'm not interested in HTML.
19:39:24Jehan_I'm interested in getting a PDF without the absurd docbook toolchain.
19:39:25Skrylareh. well basically nothing produces TeX
19:39:34Jehan_pandoc?
19:39:39Skrylarpandoc is silly
19:39:46Jehan_It also works?
19:39:51Skrylarno, it doesn't
19:39:55Skrylarnot for anything beyond markdown
19:40:04Jehan_Yes, my point exactly.
19:40:12Skrylarpandoc is a wonderful markdown toolchain. everything outside of markdown is claimed to be supported but is bricked
19:40:17Jehan_Which is why in the end I'm frequently using Markdown over Asciidoc.
19:40:25Skrylarit can't even be bothered to process ..include:somefile when imitating reST
19:40:45Jehan_Yeah, my point exactly.
19:40:56Jehan_I want pandoc-like functionality, but for Asciidoc.
19:41:09SkrylarI'm not sure why pandoc doesn't have support for those
19:41:36SkrylarI don't recall if John was overly strict about changing the internal format or not, but pandoc's internal format lacks a lot
19:42:27Jehan_Skrylar: I'm not blaming him. He primarily wanted a tool for Markdown as far as I know, and pandoc handles Markdown extremely well.
19:43:01Jehan_Basically, I think that Asciidoc is the superior format with the inferior tooling.
19:43:17Jehan_And yes, that means that I should stop being lazy and write something myself. :)
19:46:06Skrylari didn't have problems with the docbook chain outside of it being slow
19:46:17Skrylarand i don't produce book formats *that* often
19:46:50Jehan_Skrylar: PDF is what I need most often. Not necessarily in book format.
19:47:24Jehan_Presentation, short white papers to circulate.
19:49:33VarriountAny of you guys know how Erlang deals with asynchronous IO?
19:52:58Jehan_Varriount: Mostly, it doesn't. :)
19:53:08Jehan_Erlang processes are really, really lightweight threads.
19:53:26Jehan_You just have thousands of them, all of them block when I/O happens.
19:53:42Jehan_There are 1+ internal threads that do the actual I/O.
19:54:18Jehan_In principle, Go uses a similar approach.
19:56:37Jehan_The problem is that underneath you (1) need lightweight user-level threads to make things scale and (2) if one of the lightweight threads accidentally does actually blocking I/O, then you may block an entire worker thread.
19:57:13Jehan_That was a big problem with Gnu Pth (which basically does the same for single-threaded programs).
19:57:24Matthias247that's why they have custom I/O libraries that will yield instead of block
19:57:37Jehan_If any single coroutine in Pth accidentally blocks, the entire program stops.
19:58:05Jehan_Matthias247: Yeah, exactly. The problem is that when there's a bug, especially when you call out to external libraries.
19:58:17Matthias247it's the same in Go, when you configure it for a single thread
19:58:54Jehan_You'd be surprised how many libraries do a blocking DNS lookup, for example.
19:59:38Matthias247hmm, I wrapped it in std::async ;)
20:06:26Matthias247I'm however still not certain what is really the best approach for concurrency at all
20:06:56Jehan_Matthias247: The simple answer is that there is no single best approach?
20:07:09VarriountThere must be a good paradigm/method/concept/way to do asynchronous multi-threaded IO in Nimrod.
20:07:12Matthias247probably yes
20:08:28*Mat3 joined #nimrod
20:08:38Mat3Good day
20:08:41Varriountdom96 wrote asyncdispatch thinking that Nimrod's thread-isolation could be worked around. Even if it can, it will likely be in an awkward way.
20:08:57Jehan_Varriount: I honestly haven't yet looked at it.
20:09:35Jehan_Matthias247: Though the biggest problem is that 99% of all languages still don't protect you against race conditions.
20:10:06Jehan_Which is baffling, because the basic solution has been known for over 40 years.
20:10:22Matthias247on the one hand the "modern" async API's like Futures/Tasks, streams, RX are quite nice. But for some state driven things I could imagine that actors also have advantages. And for other things the green threads like in Go
20:11:10VarriountJehan_: What is the basic solution?
20:11:20Jehan_Matthias247: For the most part, these are all pretty similar in that they're basically somewhat higher level wrappers for message sending.
20:11:23Jehan_Varriount: Monitors.
20:11:50Jehan_You associate a lock with a piece of data. Before accessing the data, the lock must have been acquired.
20:12:05Jehan_Depending on your preferences, this can be done statically or dynamically.
20:12:21Matthias247Jehan_: I think the control flow is different
20:13:16Jehan_Matthias247: Yes, but different in the same way that functional and imperative languages are different. Both still rely on the same machinery at a fundamental level.
20:13:39Jehan_It's all CSP with varying types of syntactic sugar, so to speak.
20:14:00Jehan_Mind you, I consider syntactic sugar (and other abstractions) to be pretty important.
20:14:40*joelmo quit (*.net *.split)
20:15:50*untitaker_ joined #nimrod
20:16:08Matthias247In Actors you have one inbox which receives everything in a queue. And your thread's reentry point is always the receive method. While in Futures the future stores an object which can be set by another thread. And that might either restart your thread (with continuations) or you can make a blocking wait on it
20:17:42*untitaker quit (Ping timeout: 255 seconds)
20:18:17*joelmo joined #nimrod
20:19:17Jehan_Matthias247: You can think of typical future implementations as having an actor that processes (data, closure) pairs. At least in the abstract.
20:20:34Matthias247then you would have multiple actors that would work in the same thread on the same data
20:20:41Matthias247which would not be actor-like :)
20:21:14Jehan_Multiple actors would be an implementation detail.
20:22:05Matthias247for me a basic future is a mutex and a condition variable ;)
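Matthias247's description of a basic future as "a mutex and a condition variable" can be sketched with Nim's `locks` module (a minimal illustration, not the asyncdispatch `Future`; proc names are made up for the example):

```nim
import locks

type Future[T] = object
  lock: Lock        # mutex guarding done/value
  cond: Cond        # signaled when the value is set
  done: bool
  value: T

proc initFuture[T](f: var Future[T]) =
  initLock(f.lock)
  initCond(f.cond)

proc complete[T](f: var Future[T], v: T) =
  acquire(f.lock)
  f.value = v
  f.done = true
  signal(f.cond)    # wake a blocked waiter
  release(f.lock)

proc blockingGet[T](f: var Future[T]): T =
  acquire(f.lock)
  while not f.done:        # loop guards against spurious wakeups
    wait(f.cond, f.lock)
  result = f.value
  release(f.lock)
```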
20:23:44*io2 quit (Read error: Connection reset by peer)
20:24:16*Boscop_ joined #nimrod
20:27:26*silven_ joined #nimrod
20:28:50*darkfusi1n joined #nimrod
20:29:11*Boscop quit (Ping timeout: 260 seconds)
20:29:16*Boscop_ is now known as Boscop
20:29:39*silven quit (Read error: Connection reset by peer)
20:29:39*darkfusion quit (Ping timeout: 260 seconds)
20:31:08*Mat3 quit (Ping timeout: 260 seconds)
20:31:33*Mat3 joined #nimrod
20:34:23AraqVarriount: I consider it a minor problem. Afaict Linux, which is what most people use for servers, doesn't have the problem. And multi-processing for Windows servers is hardly the end of the world
20:35:24Araqin fact, usually I use multiprocessing instead of threads anyway
20:36:32VarriountSigh...
20:37:09Jehan_Varriount: Working my way through the asyncio stuff now. What's the precise problem with Windows?
20:37:28VarriountJehan_: It's not windows in particular, it's Nimrod.
20:37:42AraqVarriount: that is simply not true.
20:37:47Jehan_Varriount: Hmm, but my understanding was that the problem didn't manifest on Linux?
20:37:56VarriountJehan_: Eh.. What?
20:38:00Araqit's Windows' async IO design
20:38:08Jehan_Okay, then I must have misread something earlier?
20:38:17VarriountJehan_: It's not a problem, it's a design flaw.
20:38:37VarriountAt present, asyncdispatch doesn't make use of multiple threads.
20:38:38Jehan_Okay. What's the precise problem, then?
20:39:01*pafmaf joined #nimrod
20:40:25VarriountDue to Nimrod's GC design, and asyncdispatch's callback-based approach to asynchronous IO, a callback can usually only run in the thread it was created in.
20:40:51Jehan_I see that.
20:40:53VarriountUnless my understanding of Nimrod's threading model is completely askew.
20:41:04Jehan_Assuming the callback is a closure, yes.
20:41:06Matthias247The Winapi is configurable
20:41:26Jehan_How does the callback end up in a different thread?
20:41:26Matthias247You CAN configure it to dispatch callbacks on any thread that is available to the OS
20:41:44Matthias247that's the APC stuff
20:42:13Matthias247but you can also avoid that and use a completion port, which you query from the thread you like
20:42:17VarriountYes, asyncdispatch doesn't use that stuff. It uses GetQueuedCompletionStatus and friends.
20:42:44Matthias247I think most implementations do that
20:42:55Matthias247Hijacked threads are a pain. The same as unix signal handling ;)
20:43:26Jehan_Hmm, looking at asyncio.nim, I'm seeing only an import of select from winlean?
20:43:28VarriountMatthias247: But most implementations don't have the kind of per-thread memory management that Nimrod has.
20:43:42VarriountJehan_: Wrong file. Try asyncdispatch.nim
20:43:49Jehan_Varriount: Ah, thanks.
20:44:09Matthias247Varriount: I meant most implementations will use the completion port explicitly and will not use APC
20:47:48VarriountAraq: How is Windows' asyncio design the problem? As far as I can see, a future-based callback mechanism for multi-threaded async IO would be flawed on any platform.
20:48:14Jehan_So, the customOverlapped thing is meant to hold an arbitrary payload?
20:48:19VarriountJehan_: Yes.
20:49:12AraqVarriount: I thought the APC was mandatory
20:49:18VarriountAraq: Nope.
20:50:19Jehan_And because there's a closure stuck inside that can reference a different heap from where you started out, you have problems, right?
20:50:29VarriountJehan_: Yes.
20:51:08Jehan_Okay, now I have to figure out why the code needs a closure in the first place.
20:51:20VarriountJehan_: The closure is a callback.
20:51:34Jehan_Yes. But you can do callbacks without closures, too.
20:51:50Jehan_I'm trying to understand the underlying design reason for having a closure there.
20:52:02VarriountJehan_: You mean, stateless callbacks?
20:52:25Jehan_Or one with state in a place that is safe to use.
20:53:54OrionPKhey araq
20:54:04Araqhi OrionPK
20:54:16OrionPKwhat did u need sha1 for
20:54:38Araqfor my experimental compiler branch
20:55:11OrionPKfor what purpose though? out of curiosity
20:55:32AraqI'll hash types and ASTs into some sha1 and append that to the generated symbols to make C code generation much more deterministic
20:55:54Araqcurrently we use IDs that are counters instead
20:56:03OrionPKahh I got ya
20:57:46VarriountJehan_: Without some sort of external state, callbacks lose a great deal of their efficacy. The only way I can think of for protecting external state is to deep-copy it so that the callback is the only one that accesses the (copied) state.
20:57:59VarriountAnd that also has its drawbacks...
20:58:23Jehan_Varriount: Yes, that's what I'm talking about. Putting it in proper shared memory.
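The deep-copy idea Varriount raises above can be pictured with a small, language-neutral sketch (Python here, since the Nimrod details were still being debated; all names are illustrative): the callback owns a private copy of its state, so it keeps no references back into the originating thread's heap.

```python
import copy

def make_callback(state):
    # Deep-copy the state so the callback owns its data outright;
    # no references into the originating thread's structures remain.
    owned = copy.deepcopy(state)
    def callback(result):
        owned["results"].append(result)
        return owned
    return callback

state = {"results": []}
cb = make_callback(state)
cb(42)
# The original state is untouched; only the callback's private copy changed.
assert state["results"] == []
```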
21:00:55VarriountAraq: Could a closure be marked, such that the compiler could put everything it references into shared memory, rather than thread local storage?
21:03:25*mwpher joined #nimrod
21:03:40Araqhi mwpher welcome
21:03:58AraqVarriount: dunno, I still don't really understand the problem
21:04:00mwpherThanks :D
21:04:16Jehan_What I'd like to have for this and similar problems is the ability to create shared heaps and a function `withHeap(heap, procvar, payload)`.
21:05:06VarriountAraq: What don't you understand?
21:05:08dom96Araq: The problem is that the standard way to use IOCP with multiple threads is to spawn N threads, and in each one ask the IOCP for notifications of IO completions.
21:05:26Matthias247it's one way, but NOT the standard way
21:05:37dom96Matthias247: What is the standard way?
21:05:46Matthias247there is none, it depends on your use case
21:05:53*Mat3 quit (Quit: Verlassend)
21:06:05*BitPuffin quit (Quit: WeeChat 0.4.3)
21:06:05dom96I'm pretty sure that's how IOCP was designed to be used.
21:06:06Matthias247e.g. node.js will also use one thread for completions
21:06:09Jehan_By the way, how is the originating thread notified?
21:06:20*brson joined #nimrod
21:06:29dom96Matthias247: I can't even think of any other ways of doing this.
21:06:30*io2 joined #nimrod
21:06:33VarriountJehan_: What do you mean?
21:06:49Matthias247I think even the .NET framework uses exactly one shared thread for querying the IOCP
21:06:52Jehan_Is the callback simply executed when poll() succeeds or what?
21:07:06VarriountJehan_: With regards to Nimrod, yes.
21:07:20dom96Araq: Subsequently the IO completion notifications (the POverlapped object) contain a callback which is executed right after that notification is received.
21:07:23Jehan_Varriount: I see.
21:07:30dom96Araq: We can't execute these callbacks in multiple threads.
21:07:38AraqJehan_: what's your withHeap proposal about?
21:08:23Jehan_I'm not sure what the point is in having a closure callback, though, since the closure can't access the original heap with a thread running in it? Even if it weren't for the heap issues, that would be unsafe.
21:08:43Matthias247querying results from multiple threads is only useful for special use-cases. Because then you have to care for synchronisation between these results
21:08:44Jehan_Araq: Functionality that I've been toying with implementing.
21:09:23Jehan_Basically, the function would temporarily switch to a different heap and would execute procvar(copy(payload)), where payload is a string.
21:09:48Jehan_In practice, payload would contain a serialized form of structured data.
21:10:09Jehan_It would just be a simple way to have shared heaps and transmit data back and forth.
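A rough Python analogue of the proposed `withHeap(heap, procvar, payload)` — purely illustrative; `SharedHeap`, `with_heap`, and `store` are hypothetical names, and a JSON round-trip stands in for the deep copy into the shared heap:

```python
import json
import threading

class SharedHeap:
    """Toy stand-in for the proposed shared heap: a lock plus storage."""
    def __init__(self):
        self.lock = threading.Lock()
        self.data = {}

def with_heap(heap, procvar, payload):
    # Acquire the heap's lock, hand the proc a *copy* of the payload
    # (serialized and re-parsed, mimicking the copy across heaps).
    with heap.lock:
        return procvar(heap, json.loads(json.dumps(payload)))

def store(heap, payload):
    # One thread stuffs data into the shared structure; others can take it out.
    heap.data[payload["key"]] = payload["value"]
    return payload["value"]

heap = SharedHeap()
with_heap(heap, store, {"key": "answer", "value": 42})
assert heap.data["answer"] == 42
```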
21:10:21Araqdom96: poll returns -> some callback is picked --> execute the callback via 'spawn'. Problem solved?
21:10:56Jehan_Araq: The problem is that the callback closure can reference data in the original heap.
21:11:19Araqno, spawn will prevent that
21:11:30Araqbut yes, it will create a copy of the data
21:11:51dom96That may work.
21:12:37dom96Although to be honest I worry that spawning a thread for each callback may be somewhat of a shotgun approach to this problem.
21:12:54Matthias247I would simply query the completion port in the event loop of MY thread and execute the stored callback from there
21:12:57dom96Perhaps it makes more sense for the user to decide where to place the spawns?
21:13:00Araq'spawn' doesn't create a thread
21:13:10Araqspawn runs the task on the thread pool
21:13:17dom96still
21:13:34dom96I bet that adds a certain overhead
21:13:49Jehan_dom96: Because arguments are being copied?
21:14:02dom96In any case it seems unbelievable that it's that simple :P
21:14:43Araqwell yes, it is not
21:14:58dom96Jehan_: possibly. It'll always be slower than simply executing the callback. I'm not sure about how spawn works.
21:15:07dom96But that's my guess
21:15:15Jehan_That still means that the thread issuing the request and the thread doing the poll must be the same one?
21:15:28Araqthe spawned proc might enqueue stuff in the dispatcher
21:15:44Araqand how to do that is the harder part
21:15:51Jehan_If that's the case, there's no cross-heap stuff involved and it should be safe?
21:16:02Jehan_I.e. you shouldn't need spawn?
21:16:53Jehan_If the closure doesn't wander from one thread to another in the first place, then there's no problem?
21:17:10Matthias247Araq: normally you would send a message to the completion port which is then fetched by the dispatcher and executed. But when you forward everything to another thread you probably get an endless loop ;)
21:18:57AraqI can't follow.
21:19:09AraqThread A: runs the polling loop
21:19:23Araq-> runs task on thread B
21:19:24Matthias247I think it's mostly a question which kind of API you want in the end. Do you want to have something like .NET's async programming model where a callback is executed on the threadpool and you can do blocking waits on the result? Then you need the background thread(pool).
21:19:34Jehan_I was originally thinking there were multiple threads involved, but right now this seems to be a single thread that both queues requests and handles callbacks?
21:19:35*ARCADIVS quit (Quit: WeeChat 0.4.3)
21:20:05*mwpher quit (Quit: mwpher)
21:20:07Matthias247or do you only want futures with continuations and async/await, then you can simply poll the queue from the main eventloop thread, complete the future from there and that will enqueue the continuation on the event loop thread
21:20:36*retsej joined #nimrod
21:21:10Araqthread B: starts a new async operation. Problem: how does it tell thread A about it?
21:21:18Matthias247Jehan_: dom96 wants to use multiple threads, but you can also use only one and have start and callback on the same thread
21:21:35Jehan_Matthias247: Oh, so it's about a future design, not the current one?
21:22:14dom96Jehan_: yes
21:22:20Matthias247Jehan_: Sorry, can't tell you how Nimrods current implementation looks in detail
21:22:26Jehan_Then the starting thread still must pass the closure environment to the dispatcher thread and spawn is insufficient.
21:22:55Jehan_Matthias247: I'm reading it myself for the first time right now, not much different for me. :)
21:23:16AraqJehan_: the way I see it:
21:23:40*BlameStross joined #nimrod
21:24:11Jehan_dom96: What you seem to need is (in lieu of a closure) a procvar with an explicit environment.
21:24:25Araqit spawns a worker, the worker also gets some handle/channel which it can use to submit some other IO request
21:24:26Jehan_With the environment being a ptr to the shared heap.
21:25:30Jehan_The problem you still have is that any callback to the originating thread is impossible to do safely.
21:26:10Jehan_Assuming that the callback can be arbitrary code.
21:26:52VarriountWhat if, instead of taking a callback based approach, the Window's IOCP model was used instead?
21:27:58VarriountE.g.: A queue with notifications of IO completion that is shared among threads.
21:29:24*Trustable quit (Quit: Leaving)
21:29:28Jehan_Varriount: What I'm not sure is how threads would use either design or how they'd benefit from it.
21:30:00Jehan_Making the I/O asynchronous means that the original thread keeps running.
21:30:25Matthias247and then other threads will poll that and get notifications about IOs that they haven't started? :)
21:30:32*goobles joined #nimrod
21:31:05Jehan_So, how is it going to learn of the notification? Busywaiting?
21:31:06VarriountMatthias247: Yes, however, if I'm reading the Windows API correctly, there's a way to put notifications back into the queue.
21:31:12Matthias247I think you should at first clarify for what exactly you need multiple threads
21:31:26Matthias247and then look for a solution therefore
21:31:41Jehan_What Matthias247 said. I.e., I'm not sure there's a clear model for how this is supposed to be used.
21:31:51dom96So that we can scale to multiple cores.
21:32:03Jehan_Once you have figured out how it's supposed to be used, you can implement it.
21:32:13VarriountThe ideal situation for IOCP is in a client-per-thread approach, where multiple consumers are all requesting a resource from one producer. Normally, this kind of approach to asynchronous IO is bad, because of the contention this creates (the resource becomes available, and the thundering herd of threads all try to grab it). In the case of IOCP however, the OS explicitly controls which threads are passed the resource.
21:32:13VarriountInstead of all the threads waking up, only one is.
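The "only one thread wakes up" behaviour Varriount describes can be imitated with an ordinary blocking queue (a Python sketch, not the Windows API): each completion is delivered to exactly one waiting worker, with no thundering herd.

```python
import queue
import threading

completions = queue.Queue()  # stands in for the IO completion port
handled = []
handled_lock = threading.Lock()

def worker():
    while True:
        item = completions.get()  # exactly one waiting thread receives each item
        if item is None:          # sentinel: shut down this worker
            break
        with handled_lock:
            handled.append(item)
        completions.task_done()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for i in range(10):
    completions.put(i)       # "IO completions" arriving
completions.join()           # wait until every completion is handled
for _ in threads:
    completions.put(None)
for t in threads:
    t.join()

assert sorted(handled) == list(range(10))  # each completion handled exactly once
```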
21:32:21Jehan_dom96: That goes without saying, but how are these threads supposed to do their work?
21:32:53dom96Jehan_: They are supposed to accept and process as many connections as possible as fast as possible.
21:34:04Jehan_dom96: You don't need async I/O for that.
21:34:40Jehan_More importantly, it doesn't tell us how the threads are supposed to process connections.
21:35:10VarriountAPI Design is hard >_<
21:35:29Matthias247you can still scale to multiple threads by using one completion queue in each thread (each threads eventloop)
21:37:10*Jesin joined #nimrod
21:37:20Jehan_Honestly, a simple design to do that would be to use the current single-threaded polling loop and have callbacks simply use spawn for parallelism.
21:38:03Matthias247with boost asio you can use both approaches: Using one io_service (proactor) from multiple threads or using one per thread. I started with the first approach, but it ended up in a synchronization nightmare
21:39:12Matthias247Jehan_: I would spawn at the application level. E.g. when you receive a callback or future continuation saying your HTTP server accepted a new connection, then move that connection to another thread which will handle it
21:39:38Varriounthttp://www.coastrd.com/windows-iocp
21:39:39Jehan_Matthias247: That's exactly what I'm talking about.
21:40:20Matthias247but not automatically move callbacks in the framework to arbitrary threads
21:41:00Jehan_???
21:41:55Jehan_Matthias247: Not sure what that last part was supposed to mean.
21:43:16Matthias247Let's say you do socket.async_read(buffer, size).then((bytesRead) -> { print("Read " + bytesRead + " bytes"); });
21:43:30Matthias247Where should the continuation be executed?
21:43:37dom96boost doesn't have a macro which builds on top of its async stuff like we do.
21:44:01dom96I wonder if C#'s await scales to multiple cores.
21:44:20Matthias247If you have multiple threads running that query for completions than it could be invoked on any core
21:44:55Jehan_Matthias247: Yes, that's what I was talking about.
21:44:55Matthias247dom96: .NET allows you to specify where to start the continuation. There's a SynchronizationContext parameter for Future.ContinueWith
21:45:40Matthias247and async/await will query SynchronizationContext.Current and will set the Continuation to be executed in the same context where await was started
21:46:19Jehan_That's what spawn is supposed to do.
21:46:35Matthias247but that depends on the ability of Task<T> to interact with a scheduler
21:47:33Jehan_The biggest problem that Nimrod currently has here is the very limited way of handling shared data.
21:47:48Matthias247c++ will get that too: future<T>::then(std::executor&, std::function<void(future<T>)>)
21:47:50Jehan_ways*
21:48:15Matthias247you explicitly can specify on which executor/thread a callback will be scheduled
21:48:21*Varriount|Mobile joined #nimrod
21:48:39Jehan_Which is why I keep harping on shared heaps (or some equivalent way of exchanging data). :0
21:50:02*BitPuffin joined #nimrod
21:54:08*vendethiel- is now known as vendethiel--
21:57:41*vendethiel-- is now known as vendethiel
21:58:38Varriount|MobileAnother interesting article comparing the reactor and proactor: http://www.artima.com/articles/io_design_patternsP.html
22:08:47Varriount|MobileThe standard seems to be to have one proactor per thread, and only share the proactor across multiple threads when a generic thread pool can be easily used.
22:21:44*EXetoC quit (Quit: WeeChat 0.4.3)
22:25:28AraqJehan_: I still don't understand your withHeap ... :-)
22:26:12Jehan_Araq: Temporarily switches the current memory region (TMemRegion) to another one, copies payload to the new heap, executes the procvar argument.
22:26:46Jehan_It's a very barebones way of having shared heaps for dealing with threads having to access structured data.
22:27:02Jehan_shared structured data*
22:27:27Araqhow does it execute the procvar? in a different thread?
22:27:43Jehan_Same thread.
22:28:21Jehan_It basically switches the current thread-local heap temporarily to a different one. Restores it upon return from the call.
22:29:03Araqand the point being?
22:29:11Jehan_The shared heap will contain a hash table or queue or some other data structure that's too big to send across a channel.
22:29:41*ARCADIVS joined #nimrod
22:29:42Jehan_One thread stuffs data in that data structure, others can take it out.
22:30:22Araqok, so I switch to a *shared* heap
22:30:50Jehan_Yup. obviously, you'll also need ways to create and destroy shared heaps.
22:31:24Jehan_I'm not saying you should do that. It would be a way to get that functionality with relatively low effort, as far as I understand the current implementation.
22:31:32Araqthere is an easier way
22:31:57Jehan_Oh, and the reason why I mentioned it was that you can create closures within such a shared heap.
22:32:47Jehan_As I said, it's a pretty barebones approach. That would be my biggest concern, creating something temporary that people may start using and then it would be difficult to get rid of it again.
22:33:57Araqhow do you deal with the locking?
22:34:16Jehan_Heap has an associated lock that withHeap acquires/releases.
22:34:49Jehan_Because that's the only way to access the heap, it should be safe.
22:35:13Jehan_Well, other than putting references in global variables, but that's no different from now. :)
22:35:38Araqwe have a solution for that btw
22:35:50Araqit's already in 0.9.4, but disabled
22:35:55Jehan_Oh?
22:36:20Araqit's an effect 'gcsafe'
22:36:29Jehan_Ah, gotcha.
22:36:59AraqnoSideEffect implies gcSafe and this is really beautiful
22:37:02Jehan_Okay, to be clear, you'd also get a problem here assigning to thread-local variables that the current approach doesn't have.
22:37:56Jehan_What I'm describing is basically the Erlang model again for the case that sending a message to a process has RPC semantics.
22:38:46Araqthe problem with your solution as far as I can see is that the shared heap ... oh wait
22:38:57Araqhmm
22:39:20AraqI see
22:39:42Jehan_Note that you still need to deep copy data in and out of the shared heap.
22:40:06Araqso thread A cannot pass its own heap to thread B, but some other allocated guarded heap
22:40:17Jehan_Correct.
22:40:30Araqand this way you ensure thread A cannot access it either without holding the lock
22:40:53Jehan_Eiffel's SCOOP also essentially works this way, just with a lot of extra mechanisms for making it convenient.
22:41:00Araqsee? and this is why barriers are so sweet
22:41:07Jehan_Huh? :)
22:41:20Araqwith a barrier thread A can pass its own heap to B
22:41:50Araqthe barrier can prevent that thread A runs while its heap is away
22:42:05Jehan_By the way, there's another clever trick: Nobody says all heaps have to use the same allocation strategy.
22:42:17AraqI know
22:42:25Jehan_So you can use a heap with a simple bump allocator for temporary stuff.
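A toy version of that idea (Python, illustrative only — a real bump allocator works on raw memory): allocation is just bumping an offset into a preallocated buffer, there is no per-object free, and "freeing" is resetting the whole arena at once.

```python
class BumpArena:
    """Toy bump allocator: allocation is a pointer increment; there is no
    per-object free -- the whole arena is discarded (or reset) at once."""
    def __init__(self, size):
        self.buf = bytearray(size)
        self.top = 0

    def alloc(self, n):
        if self.top + n > len(self.buf):
            raise MemoryError("arena exhausted")
        off = self.top
        self.top += n
        return memoryview(self.buf)[off:off + n]

    def reset(self):
        # "Allocate only, don't bother with deallocating,
        #  just throw the entire heap away when done."
        self.top = 0

arena = BumpArena(1024)
a = arena.alloc(16)
b = arena.alloc(32)
assert arena.top == 48
arena.reset()
assert arena.top == 0
```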
22:42:37Araqbut every instruction on the allocation path is measurable
22:43:06Araq(kind of)
22:43:12Jehan_Not sure how a barrier would work, though, I'd have used a condition variable?
22:43:44Araqit doesn't matter how it is implemented really I'm talking about the concept
22:43:50Jehan_Ah.
22:45:16Araqspawn foo(myheap); spawn bar(myheap); sync;
22:45:40Araq--> foo and bar use the lock because of API design
22:46:01Araqspawning thread doesn't access its heap because it syncs
22:46:34Jehan_Incidentally, I'm using the same approach (shared heaps) in my current day job.
22:46:34flaviuI assume the easiest way to make `string|uint64` is to create a variant?
22:46:53Jehan_Because it has to be applied to a C++ code base several 100k in size and it's about the least painful way.
22:47:17Jehan_flaviu: Yes, I'd say so.
22:47:40Jehan_several 100 KLoc*
22:48:18Jehan_variant records ARE sum types. :)
22:48:43Jehan_Different syntax for type t = A of foo | B of bar in OCaml, essentially.
22:48:49Araqflaviu: no the easiest way is to use 'string'
22:48:52Jehan_Okay, minus the reference.
22:49:12Araqa string can encode anything already, including uint64
22:50:03flaviuI guess so, but I think that'll be more work than a variant
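Araq's point, sketched in Python (the tag byte and little-endian layout here are arbitrary choices for illustration, not anything Nimrod prescribes): a byte string can carry either case of `string|uint64` if you prefix a one-byte tag so the reader knows which case it is holding.

```python
import struct

def encode(value):
    # Tag 0x01 = uint64 packed into 8 bytes; tag 0x00 = UTF-8 text.
    if isinstance(value, int):
        return b"\x01" + struct.pack("<Q", value)
    return b"\x00" + value.encode("utf-8")

def decode(blob):
    if blob[0] == 1:
        return struct.unpack("<Q", blob[1:])[0]
    return blob[1:].decode("utf-8")

assert decode(encode("hello")) == "hello"
assert decode(encode(2**63)) == 2**63
```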
22:50:31*XAMPP-8 joined #nimrod
22:51:23*XAMPP_8 joined #nimrod
22:53:41*Matthias247 quit (Read error: Connection reset by peer)
22:54:01Jehan_Araq: Would you be interested in having a binary variant of marshal.nim, by the way?
22:54:11Araqsure
22:54:51*XAMPP-8 quit (Ping timeout: 240 seconds)
22:55:04Jehan_Okay, I may do that later this week.
22:55:35Jehan_I need most of the functionality for something else, may as well share if there's interest.
22:56:14Araqdom96: pulled my babel PR?
22:59:07*XAMPP_8 quit (Ping timeout: 240 seconds)
23:01:49dom96Araq: did you fix your mistakes?
23:02:06Araqyes, I hope so. even read your docs.
23:03:20dom96Araq: It will work with 0.9.4?
23:03:30Araqpretty sure it does, yes
23:04:18NimBotnimrod-code/packages master a026b60 Araq [+0 ±1 -0]: added c2nim and pas2nim packages
23:04:18NimBotnimrod-code/packages master 4af0465 Araq [+0 ±1 -0]: proper tagging
23:04:18NimBotnimrod-code/packages master 9e89ba1 Dominik Picheta [+0 ±1 -0]: Merge pull request #67 from Araq/master... 2 more lines
23:04:24dom96voila
23:04:37Araqthanks
23:05:25Jehan_Ugh, I think I may not be able to do the general version I envisioned. Hmm, will see. :)
23:07:27AraqJehan_: we can get bump pointer allocation with 0 cost in rawAlloc
23:07:35Araqwith a simple trick
23:07:46dom96Araq: it works. You forgot to increment the version output for -v
23:07:52Araqnow I'm thinking about rawDealloc
23:08:09Araqdom96: ok ... thanks
23:08:36Jehan_Araq: Hmm, what I mentioned earlier was just "allocate only, don't bother with deallocating, just throw the entire heap away when done".
23:09:13dom96Finally. A blog post talking about Go's faults http://yager.io/programming/go.html
23:09:44Jehan_Finally? Haven't there been hundreds? :)
23:10:07Jehan_That said, in all fairness, Go does have a major benefit: simplicity.
23:10:32dom96perhaps. It's finally on the front page of HN though.
23:11:00Jehan_I'd personally argue that they oversimplified in some unnecessary places, but they should also get credit for the value of it.
23:11:03flaviuJehan_: Why should I use Go over Java?
23:11:23flaviuJava is also very simple, if you don't go looking for complexity
23:11:23Jehan_Because Java is a pretty heavyweight white elephant?
23:11:31Jehan_Huge startup times, huge memory footprint.
23:11:37Jehan_Java isn't simple.
23:11:43Jehan_Java 1.0 was, in some ways.
23:11:48dom96Pity that blog post doesn't mention Nimrod.
23:11:57flaviudom96: Write one yourself
23:12:03dom96flaviu: Already did.
23:12:19Jehan_dom96: Probably because the author doesn't know it. Nimrod is a bit of a dark horse.
23:12:35flaviudom96: I don't see it on your page, is it elsewhere?
23:12:43Jehan_I know about it because I look at programming languages all the time, especially obscure ones.
23:13:13dom96flaviu: It's not about Go being bad though. It's about why you should use Nimrod.
23:13:14Jehan_Nimrod was just one that I kept using.
23:15:30Araqoh it's about "simplicity" again ...
23:15:33*Araq sighs
23:15:35Jehan_Reading the blog post now and not very impressed with some of the criticism.
23:15:55Jehan_It seems to be another "This language isn't enough like Haskell" post.
23:16:08dom96Yeah, it's not the greatest.
23:16:20Araqwhat is simple now was advanced 20 years ago
23:16:41dom96good night
23:16:50flaviuJehan_: I still don't understand why I should use Go over Java 1.0. Both are missing generics, any semblance of performance, operator overloading. Main advantage of Go seems to be a bit of type inference
23:17:08Araqsimple means *old*
23:17:26Jehan_flaviu: Performance.
23:17:35Jehan_if you're talking 1.0 :)
23:17:41gooblesGo is repulsive;0
23:17:46Jehan_1.0 didn't have a compiler, it was a bytecode interpreter.
23:17:50AraqI'm pretty sure function calls were an advanced "complex" feature once
23:17:54flaviuJehan_: IIRC go also has subpar performance
23:18:10Jehan_Araq: Yes. Think BASIC (gosub) and COBOL.
23:18:24Jehan_flaviu: Note bytecode-interpreter-bad.
23:18:26Jehan_Not*
23:18:28Araq"This is great, because it forces programmers to ask if they really need that variable to be mutable, which encourages good programming practice and allows for increased optimization by the compiler."
23:18:40Jehan_And, as I said, the JVM has issues with startup time and memory footprint.
23:18:44Araqyes, and I *really* need that mutable state, get lost
23:19:06Jehan_Araq: That's exactly what I meant by "this language isn't enough like Haskell".
23:19:11Araqalso immutability is often very hard to optimize *away*
23:19:25*darkf joined #nimrod
23:20:12Jehan_You can achieve immutability by encapsulating data and not providing methods to mutate it.
23:21:11flaviuJehan_: one of the things that people hate most in java is the "getFoo", "setFoo" crap
23:21:41Jehan_Mind you, there are some places where I like it if you can declare things as immutable, but I don't think that's what the author had in mind.
23:22:03Jehan_flaviu: Not really specific to Java, though. :)
23:22:24Jehan_It's also something that I liked about Sather back in the days.
23:22:33fowlimperative uber alles
23:22:39flaviuAraq: I don't really know about that. The immutability thing can be thrown away after semantic checking; I don't see how it'd hurt things
23:23:28Jehan_fowl: I don't have a very strong preference for functional or imperative programming, but I think that *pure* functional programming is actively harmful.
23:23:57flaviuAlso, immutability makes it easier to reason about a program, if anything. If you declare stuff one way, you don't have to worry about it changing which lets you not have to think of that variable
23:24:11*pafmaf quit (Quit: This computer has gone to sleep)
23:24:22Jehan_flaviu: The thing is that immutability is only one invariant, and a very simple one.
23:24:29*joelmo quit (Quit: Connection closed for inactivity)
23:25:00Jehan_Generally, you express invariants over ADTs by restricting the methods that you can use to mutate them.
23:25:14Jehan_Immutability is the simple case where you have no methods that can do that.
23:26:04flaviuJehan_: I'm speaking of local immutability here mostly
23:26:27flaviueg, I declare a local variable at the top and I know that even if I don't read half the method, it'll be the same
23:27:16Jehan_flaviu: That's one place where immutability comes in handy, yes. See, e.g., let in Nimrod.
23:27:27Jehan_But that's not what the author is talking about.
23:27:50Jehan_The biggest problems with mutable state in my experience have been procedures that change some random global variable.
23:30:18Araqflaviu: immutability loves trees, the hardware loves arrays
23:30:42Jehan_Araq: Amen.
23:31:13Araqthat's why you won't find a functional systems programming language
23:31:18Jehan_Why don't people in computer algebra love Haskell? Because 10000x10000 matrices over finite fields are a PITA in Haskell.
23:31:33flaviuI see what you mean, immutability encourages less efficient data structures
23:31:50flaviunot necessarily that the compiler has a harder time
23:31:59Araqyes
23:32:16Araqor that the compiler HAS a hard time to transform a tree into an array
23:32:27Araqor even things like:
23:32:42Araqlet (data, success) = foo(data)
23:33:01Araq--> let success = foo(addr data)
23:33:08Araqare not trivial to do
23:33:22Jehan_By the way, Araq, I have to write this down so that I can quote it. :)
23:33:47*io2 quit ()
23:33:52Araqalright.
23:36:06Araqwhen C++ was new all this overloading, default parameters etc. was "complex"
23:36:19Araqok C++ still gets lots of blame for overloading
23:36:33goobleswhats wrong wid overloading;0
23:36:35Araqbut Java and C# have it too and Java is "simple"
23:36:51Araqsimplicity doesn't mean much at all
23:36:51Jehan_Overloading is a double-edged sword.
23:37:24gooblesno it is awesome, C++ should let me overload every symbol
23:37:34goobleswhy can't i overload my $
23:37:38gooblesWAAHH
23:38:06Jehan_Araq: I think there's a difficulty threshold where a significant percentage of programmers begin to struggle.
23:38:31gooblesstruggle wid what overloading?
23:38:53AraqJehan_: that threshold is not static.
23:39:04Jehan_Which, to get back to an earlier topic, is why I think there is a clear audience for things like Crystal.
23:39:18Jehan_Araq: But only because of the Flynn effect. :)
23:39:26AraqIn 20 years people have other new "complex" features to worry about
23:40:05Jehan_For what it's worth, I think that C++ is objectively too complex for large software systems if it isn't reined in by using a subset.
23:40:23Araqno, that is exactly not the problem.
23:40:38AraqC# is very complex too and nobody really complains
23:40:50Araqit has to be very complex for the simple reason it has LOTS of features
23:41:01Jehan_C# is easier in some very fundamental ways.
23:41:08AraqC++'s problem is the lack of *memory safety*
23:41:08gooblescrystal never heard of it... oh boy yet another language using conservative collector
23:41:12Jehan_For starters, automatic memory management.
23:41:21Jehan_Heh. :)
23:41:41AraqC# has it and proves complexity with safety is entirely workable
23:42:00Jehan_But yeah, the hoops that you have to jump through to deal with C++ lack of automatic memory management is a huge contributor to its complexity.
23:42:22gooblesC++ has automatic memory management;0
23:42:23Jehan_And it lacks (or lacked) features that require other features that are actually more complex.
23:42:44Jehan_goobles: Shared pointers are a joke, if you mean that. :)
23:42:51Araqeven that wouldn't be that much of a problem if it was *safe* after the compiler accepts your programs
23:42:57gooblesno mostly unique_ptrs Jehan
23:43:01Jehan_They're a performance hog, which is why everybody works around them.
23:43:03gooblesshared_ptr is rare
23:43:09Jehan_unique_ptrs aren't automatic memory management.
23:43:16gooblesyes they are
23:43:18Jehan_They're by definition manual memory management.
23:43:32gooblesnope they automatically clean up
23:43:35Jehan_Automatic memory management means you don't have to worry about ownership and lifetime.
23:43:53flaviuAraq: I still don't understand your lambda idea
23:44:11Jehan_Different definitions: Automatic memory management, as it's understood in the literature, means garbage collection, reference counting, and such.
23:44:16flaviufrom last night
23:44:18Araqflaviu: you mean my lifting operator @ ?
23:44:22flaviuYes
23:44:33gooblesyou always must worry about ownership and lifetime, even in a GC language or you will hold on to shit forever
23:44:56Araqit's simply a syntactical feature to make a ternary operator a binary operator
23:45:09Araqa @`+` b
23:45:12Araqbecomes
23:45:17Jehan_goobles: Funnily enough, LISP has had GC since the 1950s and has never worried about either.
23:45:24Araq@(`+`, a, b)
23:45:46flaviuOh, I see
23:46:17Araqthe primary use case seems to be Lifting
23:46:18Jehan_Hmm, is the first argument fixed or can it be a variable?
23:46:20gooblesi'm sure they did, it is pretty easy to tuck away a GC reference that keeps something alive way beyond when it should have died
23:46:38Araqbut I'm sure we'll find lots of others
23:46:58AraqJehan_: can be a variable
23:47:28flaviuTBH I don't really like the way it looks, the @ looks weird
23:47:36Jehan_Araq: Not sure I understand the intended application?
23:48:01Araqflaviu: the @ is just a placeholder for any operator symbol
23:48:23AraqJehan_: lift an operator on the fly to e.g. seqs
23:48:42Araqa + b # + for atomic values
23:48:53Araqa @`+` b # vector addition
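In Python terms (illustrative only; Nimrod's actual `@` proposal is a parser-level rewrite, not a library function), the lifting amounts to a higher-order function that turns a scalar operator into an element-wise one:

```python
import operator

def lift(op):
    # Turn a scalar binary operator into an element-wise one over
    # sequences -- roughly what a @`+` b would desugar to: @(`+`, a, b).
    def lifted(a, b):
        return [op(x, y) for x, y in zip(a, b)]
    return lifted

vadd = lift(operator.add)   # the lifted `+`
assert vadd([1, 2, 3], [10, 20, 30]) == [11, 22, 33]
```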
23:49:18flaviuAraq: What's your opinion on http://stackoverflow.com/a/8001065/2299084 "Placeholder syntax"?
23:49:43Araqflaviu: we might get it, _ is currently not even a token
23:49:55Jehan_Hmm, so foldl, basically?
23:50:16Araqfoldl is a reducer
23:50:25Jehan_Oh, I see.
23:50:27Araqbasically like 'map'
23:50:42Jehan_Got it. Not sure why it needs extra syntax, though?
23:51:43Jehan_Big problem with operators for non-obvious uses is that you can't effectively grep or Ctrl-F for them.
23:52:08goobleswho cares thats crappy linux stuff;0 My IDE can find them just fine
23:52:54flaviuI think a good idea is to design the language for idiots, unless there's a good reason otherwise, and I'm not really sure that this would pass that test
23:53:00Jehan_goobles: I'm talking about documentation.
23:53:14Araqgoobles: I agree. :-)
23:53:39Jehan_Try googling for "operator []".
23:53:49gooblesoh like on google?
23:53:53Araqflaviu: design for idiots and you get Go and Java
23:54:11Jehan_Not just Google, any form of unstructured text.
23:54:21*johnsoft quit (Ping timeout: 240 seconds)
23:54:28goobleswell the syntax for overloading operators in C++ is pretty stupid
23:54:33gooblesbut they are still useful
23:54:40*johnsoft joined #nimrod
23:54:54AraqJehan_: and yet [] is already an operator in many many languages
23:54:55Jehan_Think web pages, ebooks, PDFs, generated docs, etc.
23:55:26Jehan_Araq: Yes, and because it's pretty obvious how it's used, that's mostly not a problem.
23:55:32Jehan_[] is for indexing by convention.
23:55:37Araqwhat if I need to google C#'s ?? operator?
23:55:42Araqthat's not overloadable
23:55:49*XAMPP-8 joined #nimrod
23:55:50Araqbut how does that help you googling it?
23:55:56flaviuAraq: But on the other hand, people USE java ;p
23:55:57Jehan_Araq: That's why I'm not a big fan of that, either.
23:56:48Jehan_It's not about overloadability, it's about using a bunch of special characters.
23:56:54Jehan_Or worse, a single special character. :)
23:57:34Jehan_Tools for unstructured text tend to deal in words, at worst with numbers and possibly underscores/hyphens mixed in.
23:57:43Jehan_s/worst/best/*
23:58:27flaviuOf course, you have to balance things out; making constructs powerful while keeping them simple, ideas that are somewhat at odds to each other