<< 03-06-2014 >>

00:00:47*xenagi joined #nimrod
00:02:48*Kazimuth quit (Remote host closed the connection)
00:05:18*filwit quit (Ping timeout: 245 seconds)
00:10:03*caioariede quit (Ping timeout: 265 seconds)
00:18:09*filwit joined #nimrod
00:18:58*q66 quit (Quit: Leaving)
00:40:38filwitJehan_: hey, you around? Wanted to talk about OOP stuff now. Concerning multi-interfaces vs parametric polymorphism, I wrote a simple example of OOP Nimrod code (using some pseudo macros I'm currently designing) which is very trivial. gist: https://gist.github.com/PhilipWitte/b1316b22d69b47bcf593
00:40:47filwitJehan_: It's just one class, two interfaces (traits). Note that I'm using 'method' to denote dynamic dispatch, but these procedures will actually convert to custom dispatch trees (not Nimrod's built-in method dispatch, which only supports single inheritance). My question to you is: what design do you consider better practice than this, and why?
00:40:54filwitJehan_: I could pass `p` to a generic proc which duck-types it against IAction/IVisual and returns a structure (which is, I assume, akin to what you mean by 'parametric polymorphism')... but that's (one way to achieve) pretty much exactly what is going on here. Except that I'm building it into the trait object, which has the benefits of enforcing behavior at class declaration, isolating the potential references the class can be used from (helpful for sanity on large codebases with many devs), and potentially optimizing the code (because dispatch trees are theoretically inlineable and use less memory than procvar lists).
00:41:46filwitJehan_: so i'm interested to hear your thoughts on better design approaches to this simple example.
00:41:58flaviu1filwit: He's sleeping
00:42:46filwitcrap, just realized he's not even here...
00:42:55filwiti wrote all that out for nothing, lol
00:42:58flaviu1Maybe he'll check the logs
00:43:04filwityeah
00:43:08flaviu1Just gist it and send him a link when he appears
00:43:18filwityeah, good idea
00:44:18*caioariede joined #nimrod
00:47:36Varriountfilwit: You could use memoserve
00:48:01filwitdunno how to use that really
00:48:26filwitbut it's fine, i added the comment to the gist, and will just ping him next i'm on
00:49:55filwitthe github Nimrod color highlighter really needs to be updated a bit. It's odd how proc names are highlighted red, but not method/template/iterator/converter names
00:50:29filwitplus i think some keywords, like 'using', are missing (which makes sense since it's new)
01:02:05*brson quit (Ping timeout: 264 seconds)
01:10:28*caioariede quit (Ping timeout: 265 seconds)
01:17:02flaviu1filwit: Actually, it's github's fault things aren't updated
01:17:22filwitflaviu1: ?
01:17:32filwitoh, nevermind
01:17:37filwitforgot what i posted earlier
01:17:54flaviu1Wait, NM
01:18:13flaviu1I thought that they used pygments.rb, which hasn't been updated in a while
01:18:24filwiti imagine we could put in a request to update the style stuff
01:18:36filwitidk how these things work with github though
01:19:10flaviu1filwit: Well, you'd send a PR to pygments, and github might eventually update their pygments version number
01:19:35flaviu1Might get it done faster if you send an email to support after your PR gets accepted
01:19:47filwitah, okay. makes sense
01:20:47flaviu1https://bitbucket.org/birkenfeld/pygments-main seems like the project is active and accepting PRs
01:22:35filwitrecent activity on that: Swift support... people work fast, lol
01:22:46flaviu1Just a bug report, not a PR there
01:22:56filwitah, okay
01:23:04filwitdoes bitbucket style Nim code correctly?
01:23:35flaviu1Yes, it also uses pygments
01:23:50filwitalso, i noticed you changed away from Gitbook on your NimByExample
01:23:52filwit:(
01:24:02filwitbut what color coding are you using for Nimrod now?
01:24:09flaviu1pygments
01:24:17filwitk
01:24:23*nande joined #nimrod
01:24:35flaviu1If there's anything you miss from gitbook, tell me and I'll see if I can fix it
01:25:07filwiti just thought it in general looked very clean, and i liked the little check-marks next to chapters i've read
01:25:26filwitbut those aren't really worth implementing on your own i'm sure.
01:25:52filwityour new one doesn't look bad. I just prefer the old one more
01:26:23flaviu1It might be, I haven't done much javascript and probably should do more
01:26:38filwitbut i'm sure you have your reasons for switching, and ultimately having the site up and complete is worth a lot more than fancy CSS
01:27:46flaviu1My reasons for switching were to avoid a node.js dependency and because gitbook almost made me lose my git repository
01:28:07filwithow did it manage that?
01:28:22filwiti've never used it before
01:28:37flaviu1You specify an output directory, and when it wants to output the files, it just rm's that directory
01:28:50filwitnice
01:28:56filwitlol
01:29:05flaviu1lol, yep
01:30:03*caioariede joined #nimrod
01:30:44VarriountGah. How do I have a closure pass itself to another procedure without passing a pointer to itself as an argument?
01:31:06VarriountIt's like trying to open a crate with the crowbar inside the crate
01:33:07filwituse the gravity gun, better than crowbar
01:33:27filwit(ie, don't know the correct answer to your question, sorry)
01:37:28flaviu1Varriount: Best I can do is evil pointer magic
01:37:53flaviu1But wait, that might not work
01:38:14VarriountAh, got it. Create an empty reference to the closure before the closure definition, and fill in the reference afterwords.
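Varriount's trick (create an empty reference before the closure's definition, then fill it in afterwards) is language-agnostic; here is a minimal Python sketch of the same idea, with `schedule` standing in for the hypothetical "other procedure" that receives the closure. (In Python a named inner function can actually see its own name directly; the slot is there to mirror the Nimrod pattern.)

```python
def schedule(task):
    # hypothetical "other procedure" that receives the closure
    return task()

def make_task():
    slot = [None]               # empty reference, created before the closure
    def task():
        # the closure reaches itself through the pre-declared slot,
        # so it can hand itself to another procedure without taking
        # a pointer to itself as an argument
        assert slot[0] is task
        return "ran"
    slot[0] = task              # fill in the reference afterwards
    return task
```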
01:40:44VarriountNimrod has made me greatly appreciate closures.
01:46:03OrionPKfowl I added that support for namespaces that I discussed with you previously
01:46:40OrionPKjust added in support for "ns" in your opts argument to class
01:47:03flaviu1filwit: That should be possible without any javascript! the little check is actually a green 
01:48:11filwitflaviu1: i was actually thinking it was a server-side thing, but i suppose it would be easy enough with JS as well.
01:48:18fowlOrionPK, how does it look? ns: sf ?
01:48:31flaviu1Well, I'm going to try to do it with just html and css
01:49:25filwiterr.. yeah.. what am i thinking. Just use a :visited CSS specializer on the link.. dur
01:52:39*nande quit (Remote host closed the connection)
02:15:39flaviu1It turns out you can't be too clever with :visited links for privacy reasons or something like that, so you have to use cookies
02:18:02flaviu1I wonder how gitbook does it, no cookies
02:20:26filwiti'll inspect rustbyexample and see if they're using :visited
02:20:38filwitnot sure what you're saying about :visited not working.. that should work fine
02:22:25filwitdoesn't look like it at first glance
02:22:44flaviu1https://hacks.mozilla.org/2010/03/privacy-related-changes-coming-to-css-vistited/
02:23:56*Joe_knock quit (Quit: Leaving)
02:24:34filwitthanks, was not aware of this change
02:25:40flaviu1May 2010, pretty old. Chrome is the same. Apparently only Opera allows it
02:26:20filwityeah haven't actually used :visited in awhile -__-
02:26:57filwitwell Gitbook *could* be just storing sessions stuff per IP i guess
02:27:04filwitbut that doesn't really sound right
02:27:13filwityou sure it doesn't use cookies?
02:27:24flaviu1I checked my cookies, I didn't see anything
02:27:42filwitthere was some new JS store() thing awhile ago, but i lost all major interest in JS a couple of years ago
02:27:48filwitk
02:28:25flaviu1Yeah, its store
02:41:15fowllol
02:41:24fowl\:: can be used as an operator
02:41:37fowlim going to do it, to anger flaviu1 :>
02:42:28flaviu1fowl: Fold right?
02:43:22*saml_ quit (Quit: Leaving)
02:44:47*kemet joined #nimrod
02:46:10fowlhow would that look
02:47:01*kemet quit (Client Quit)
02:48:21flaviu1:\
02:50:15flaviu1`:\` is the scala operator for foldRight
02:51:54flaviu1Hmph. Firefox's developer console uses the same APIs as javascript, so its actually wrong about visible link styling
02:57:34fowlflaviu1, scala chooses very weird operators
02:59:18flaviu1No, it makes sense. You start with the first element and the accumulator, apply your function. the items you've applied line up like a triangle: 1 = 1 + 2; 2 = 1 + 2 + 3; 9 = 1+2+3+4+5+6+7+8+9;
02:59:24flaviu1See the shape? :P
03:01:15fowlthanks for explaining a left fold to me
03:01:48fowli dont see :\ in this article though
03:02:28flaviu1I never really figured out the difference between a left and right fold anyway
03:02:54fowlthe diff is ((1*2)*3) vs (1*(2*3))
03:04:24fowlnimrods leftfold is diff: foldl(someseq, a + b): seq
03:04:49fowlwell, its a template
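fowl's point about the difference between the folds — ((1*2)*3) for a left fold versus (1*(2*3)) for a right fold — can be made concrete with a small Python sketch (`foldl`/`foldr` here are hand-rolled stand-ins, not Nimrod's sequtils templates):

```python
def foldl(f, acc, xs):
    # left fold: f(f(f(acc, x1), x2), x3) ...
    for x in xs:
        acc = f(acc, x)
    return acc

def foldr(f, xs, acc):
    # right fold: f(x1, f(x2, f(x3, acc)))
    for x in reversed(xs):
        acc = f(x, acc)
    return acc

sub = lambda a, b: a - b
# A non-associative operator exposes the difference:
# foldl: ((0-1)-2)-3 == -6
# foldr: 1-(2-(3-0)) ==  2
```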
03:05:45*caioariede quit (Ping timeout: 252 seconds)
03:09:09filwitit's really annoying for OOP stuff to not be able to forward-declare types :\
03:12:07filwitclass Key: proc new(...) = allKeys.add(this) # `allKeys` is a `seq[Key]` which needs to know what a `Key` is..
03:12:20*caioariede joined #nimrod
03:12:53*def-_ joined #nimrod
03:12:55filwitso i have to make a special {.asis.} pragma for vars/procs in a class body just so i can define this global after the type is defined
03:14:13filwitwould be nice if i could just put `type Key = ref object {.undefined.}` or something above, and alleviate the need for the special prag condition inside the macro
03:14:51filwitthere isn't such a thing in Nimrod that already exist is there? Now that I mention it, I think I saw something like this before...
03:15:50flaviu1filwit: Sorry, I can't find a red color for already visited pages that doesn't look like vomit
03:16:17*def- quit (Ping timeout: 252 seconds)
03:16:18filwitflaviu1: just use a checkmark icon like gitbook does
03:21:23*hoverbear joined #nimrod
03:26:14OrionPKfowl you can look at it in the tests
03:26:25OrionPKfowl class(test, ns: pp, header: "../test.hpp")
03:26:51filwitah ha! found it: http://build.nimrod-lang.org/docs/nimrodc.html#incompletestruct-pragma
03:26:55OrionPKclass(Window, ns: sf, header: sfml_h):
03:26:59filwitnot sure that will work tho
03:27:44fowlOrionPK, i saw
03:30:18fowlOrionPK, i wrote a function so you can use pp.Test
03:31:58*caioariede quit (Ping timeout: 245 seconds)
03:32:08OrionPKfowl pp.Test?
03:32:16fowlhttps://gist.github.com/fowlmouth/4ef59af751acf825ec01
03:32:38OrionPKI just aliased "ns" and "namespace" so you can use either one now
03:33:19fowlOrionPK, as in class(pp.test) for test as "pp::test"
03:34:07OrionPKah cool
03:34:24fowlOrionPK, can i merge in some other options like inheritable, parent, to make it a general-case oop macro
03:34:29OrionPKsure
03:34:53OrionPKmaybe we should start a branch with that stuff even
03:35:43OrionPKwe should clean up and split out the option handler as well
03:35:56OrionPKso it isn't doing all that case logic in the class macro
03:36:06fowlok
03:50:23filwitflaviu1: you sounded like you knew a bit about the GC, and since Araq isn't awake, do you know if the GC ref-counts each ref assignment? Or does the 'deferred' part mean the ref-counting happens only during a scan?
03:52:00flaviu1filwit: I don't know much about the GC in nimrod, just general ideas. I don't know how nimrod does it
03:52:21filwitflaviu1: ah, okay. I'll ask when he gets up then.
04:05:32*xenagi quit (Quit: Leaving)
04:15:58*filwit quit (Quit: Leaving)
04:56:39*kemet joined #nimrod
04:59:16*kemet quit (Client Quit)
05:21:11*nanda` joined #nimrod
05:39:51*xtagon quit (Quit: Leaving)
05:46:37*kemet joined #nimrod
05:47:06*kemet quit (Client Quit)
05:59:11*bjz joined #nimrod
05:59:16*Demos quit (Read error: Connection reset by peer)
06:22:40*bjz quit (Ping timeout: 276 seconds)
06:23:03*bjz joined #nimrod
06:26:13*brson joined #nimrod
06:27:17*brson quit (Client Quit)
06:27:36*brson joined #nimrod
06:32:14Klaufirhow can I print the type of an expression?
06:33:49*hoverbear quit ()
06:38:27*brson quit (Ping timeout: 260 seconds)
06:38:58*brson joined #nimrod
06:47:13fowlKlaufir, with typetraits you can do name(type(expression))
06:49:29*bjz quit (Ping timeout: 264 seconds)
07:00:34Klaufirfowl: thanks
07:05:15fowlnp
07:17:56*darkf quit (Read error: Connection reset by peer)
07:18:46*darkf joined #nimrod
07:39:53*kemet joined #nimrod
08:07:54*brson quit (Quit: leaving)
08:11:06*bjz joined #nimrod
08:11:35*kemet quit (Quit: Instantbird 1.5 -- http://www.instantbird.com)
08:15:35*nanda` quit (Ping timeout: 252 seconds)
08:18:53*reactormonk quit (Ping timeout: 252 seconds)
08:21:35*reactormonk joined #nimrod
08:28:38*BitPuffin quit (Ping timeout: 240 seconds)
08:45:17*io2 joined #nimrod
08:51:28*XAMPP-8 joined #nimrod
09:16:04*kemet joined #nimrod
09:18:04*kemet quit (Client Quit)
09:26:25*XAMPP-8 quit (Quit: Leaving)
10:00:06*kunev joined #nimrod
10:16:39*kunev_ joined #nimrod
10:19:11*BitPuffin joined #nimrod
10:20:41*kunev quit (Ping timeout: 264 seconds)
10:21:51*kunev_ quit (Quit: Reconnecting)
10:21:57*kunev joined #nimrod
11:31:22*Jehan_ joined #nimrod
11:41:11NimBotnimrod-code/packages master 1c1adea Grzegorz Adam Hankiewicz [+0 ±1 -0]: Adds gh_nimrod_doc_pages to list.
11:41:11NimBotnimrod-code/packages master 022ed59 Billingsly Wetherfordshire [+0 ±1 -0]: Merge pull request #62 from gradha/pr_gh_nimrod_doc_pages... 2 more lines
11:50:47*vendethiel quit (Read error: Connection reset by peer)
11:54:10*ehaliewicz quit (Ping timeout: 276 seconds)
12:00:18*vendethiel joined #nimrod
12:05:23*kemet joined #nimrod
12:06:49Klaufirhttp://nimrod-lang.org/manual.html#modules : here in the example, after 'import A' it uses A.T1, but after 'import B' it doesn't use B.p(), just p()
12:07:16Klaufirso, does import pollute the namespace of the module that issues the import statement?
12:09:00EXetoCyes
12:09:02EXetoChttp://build.nimrod-lang.org/docs/manual.html#from-import-statement
12:10:00*kemet quit (Client Quit)
12:10:11Klaufirmaybe something is wrong with me, but I can't find the answer in the section you linked
12:10:46Klaufiror you just show it as best practice?
12:11:01*vegai left #nimrod (#nimrod)
12:11:03*saml_ joined #nimrod
12:11:04EXetoC"It's also possible to use from module import nil if one wants to import the module but wants to enforce fully qualified access to every symbol in module."
12:11:07EXetoCthat seems relevant
12:11:38Klaufiryes :)
12:11:48Klaufirthank you
12:16:59*untitaker quit (Ping timeout: 265 seconds)
12:22:09*untitaker joined #nimrod
12:46:17*saml_ quit (Ping timeout: 252 seconds)
12:48:50*darkf quit (Read error: Connection reset by peer)
12:58:47*caioariede joined #nimrod
13:00:09Jehan_Klaufir: Yes, Nimrod's "import m" is equivalent to "from m import *" in Python.
13:00:48Jehan_However, if you have the same procedure defined in both and attempt to use it unqualified, Nimrod will raise an error when trying to compile that.
13:01:49Jehan_Same procedure meaning: same name and same type signature (otherwise, normal overloading resolution applies).
13:12:19*caioariede quit (Ping timeout: 260 seconds)
13:22:57*vendethiel quit (Read error: Connection reset by peer)
13:24:31*vendethiel joined #nimrod
13:24:52NimBotAraq/Nimrod devel 56a912f klaufir [+0 ±1 -0]: adding header pragma for printf ffi example
13:24:52NimBotAraq/Nimrod devel 24c0044 klaufir [+0 ±1 -0]: header pragma set to '<stdio.h>' in importc section
13:24:52NimBotAraq/Nimrod devel ead2d4c Dominik Picheta [+0 ±1 -0]: Merge pull request #1238 from klaufir/devel... 2 more lines
13:30:11*Jehan_ quit (Quit: Leaving)
13:38:44flaviu1the parser is a lot less scary than the VM
13:42:50*vendethiel quit (Read error: Connection reset by peer)
13:45:49*vendethiel joined #nimrod
13:57:33*EXetoC quit (Quit: WeeChat 0.4.3)
14:15:39*BitPuffin quit (Ping timeout: 252 seconds)
14:18:54*caioariede joined #nimrod
14:28:40flaviu1Is anyone here familiar with the compiler? I made it so that `(` is a valid identifier, but it says that it's redefining '('. Are accent quoted identifiers usually surrounded by accents?
14:32:38*Jehan_ joined #nimrod
14:56:40*Johz joined #nimrod
15:00:07*EXetoC joined #nimrod
15:24:46*caioariede quit (Ping timeout: 276 seconds)
15:36:24*rixx joined #nimrod
15:36:29*kunev quit (Quit: leaving)
15:41:56*bjz quit (Ping timeout: 255 seconds)
15:49:58*caioariede joined #nimrod
16:07:01*Johz quit (Quit: Leaving)
16:15:37*xtagon joined #nimrod
16:17:00*hoverbear joined #nimrod
16:21:47*caioariede quit (Ping timeout: 252 seconds)
16:31:05*BitPuffin joined #nimrod
16:41:46*Matthias247 joined #nimrod
16:48:57*caioariede joined #nimrod
16:58:03*xtagon quit (Quit: Leaving)
17:00:03*xtagon joined #nimrod
17:08:42*brson joined #nimrod
17:09:33*q66 joined #nimrod
17:09:33*q66 quit (Changing host)
17:09:33*q66 joined #nimrod
17:28:47*q66 quit (Ping timeout: 252 seconds)
17:28:57KlaufirAre sequences similar to C++ vector?
17:30:41*Changaco joined #nimrod
17:31:10dom96yep
17:32:02Klaufirdom96: and the size is always some factor of 2 ?
17:32:07AraqKlaufir: updated your PR yet?
17:32:17AraqKlaufir: no. why would it?
17:32:34KlaufirAraq: PR - give me a second
17:32:53KlaufirAraq: I mean, that for the C++ for 9 elements they allocate space for 16
17:32:55*q66 joined #nimrod
17:32:55*q66 quit (Changing host)
17:32:55*q66 joined #nimrod
17:33:28KlaufirThe vector works this way: when running out of the allocated space, it allocates 2 times as much as before
17:33:35EXetoCsize and capacity respectively. seems like common terminology
17:33:36AraqKlaufir: I know what you mean but the factor of 1.5 has been proven to be superior iirc
17:33:55Jehan_Araq: Correct.
17:34:11flaviu1Araq: Accent quotes don't work in enums, how should I handle that? Apparently, "redefinition of '('"
17:34:23KlaufirAraq: So, for sequences its 1.5?
17:34:31EXetoCs/size/length
17:34:31AraqKlaufir: yes
17:35:15Araqflaviu1: not sure what your problem is
17:35:21Araqgist me your diff please
17:35:38Jehan_To be precise, the factor should be the golden ratio or a bit less.
17:36:02Jehan_The problem with a factor of two is as follows:
17:36:14flaviu1Araq: https://gist.github.com/124c4a4367254288701e
17:36:35Jehan_When you grow your memory to 2^n items, you've so far allocated 1+2+4+…+2^(n-1) = 2^n - 1 items.
17:36:45Jehan_Which means that you can't reuse your memory and are wasting half of it.
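Jehan_'s argument can be checked numerically. A small Python sketch (the `total_previously_allocated` helper is purely illustrative, not from any library) compares how much memory all earlier, now-freed blocks add up to against the size of the next block:

```python
def total_previously_allocated(factor, steps, start=1):
    """Grow a buffer `steps` times by `factor`; report the sum of all
    earlier (now freed) block sizes vs. the size of the final block."""
    sizes = [start]
    for _ in range(steps):
        # grow by `factor`, always by at least one element
        sizes.append(max(int(sizes[-1] * factor), sizes[-1] + 1))
    return sum(sizes[:-1]), sizes[-1]
```

With factor 2 the freed space is always exactly one item short of the next block, so it can never be reused in place; with 1.5 the total freed space eventually exceeds the next block's size.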
17:36:52KlaufirAraq: I am very new to github, should I open a new PR or somehow update the last one ?
17:37:27flaviu1What I've done is collapse all the special bracket cases in the tokenizer, which seems to work since `{` is a valid proc name
17:38:26Jehan_1.5 is frequently used because it can be calculated fast (*3/2).
17:38:55KlaufirJehan_: You know that modern operating systems don't actually allocate the memory unless you write to it, so even when having 2 GB malloced and 1.1 GB used, only 1.1 GB physical mem will be used, but correct me if I am wrong here.
17:39:05flaviu1Araq: Sorry, that gist was missing my uncommited changes: https://gist.github.com/a5f3bbb520c4f0dda829
17:39:13Jehan_Klaufir: After the memory has been used, it will be mapped.
17:40:06Jehan_And yes, I know that.
17:40:46Araqflaviu1: you need to replace add(result, newIdentNodeP(getIdent(tokToStr(p.tok)), p)) with some accumulator string and then do: add(result, newIdentNodeP(getIdent(acc, p))
17:41:26AraqKlaufir: you can update the last one but don't ask me how
17:41:57Araqyou can ask Changaco though, he built github ... right?
17:42:26flaviu1Klaufir: Just push to your repo, as before, and the PR will be updated
17:42:32KlaufirJehan_: given that unwritten memory will not be mapped, why should people worry about the factor of 2 vs 1.5 . What am I missing here?
17:43:01KlaufirAraq: dom96 seems to have fixed it in the meantime: https://github.com/Araq/Nimrod/pull/1238
17:43:43Jehan_Klaufir: Because you resize a memory block once there's no space left in it, meaning that all locations have been written to and it has been mapped.
17:44:05dom96Yeah, I merged the PR.
17:44:12Jehan_That's assuming that memory isn't already mapped upon allocation, e.g. because the language demands initialization with zeros.
17:44:22Araqdom96: nice :P
17:44:52flaviu1Jehan_: How about calloc?
17:44:57KlaufirJehan_: I see, thanks
17:45:06flaviu1Does that do anything to avoid initialization?
17:45:43Jehan_flaviu1: On the contrary, calloc() requires that the memory is initialized with zeros.
17:46:05flaviu1I know, but does it typically hook into the OS to avoid mapping the memory?
17:47:09Jehan_flaviu1: on POSIX, you can rely on mmap() doing that for you when memory is being paged in and when it's fresh memory.
17:47:39Jehan_But in general, memory will be reused a lot, and previous code may have written arbitrary values.
17:48:05Jehan_Of course, some allocators also unmap blocks upon deallocation.
17:48:24Jehan_And OpenBSD somewhat radically uses mmap()/munmap() for ALL big pieces of memory.
17:49:00Araqnimrod's memory manager does that too, if the OS supports it
17:49:06Araqmost don't
17:49:30Araqmunmap() really has lots of strange behaviour
17:49:40Jehan_malloc() generally makes it easier, because there's no requirement that the memory contains any specific values.
17:50:17*rixx left #nimrod ("Foyfoy!")
17:50:35Jehan_flaviu1: By the way, I saw what filwit said in the logs.
17:51:06Jehan_flaviu1: You can point him at https://gist.github.com/rbehrends/9bbeef9ee43260b263ee if you see him.
17:51:17flaviu1Ok, I'll do that
17:51:52Jehan_In general, you need subtype polymorphism only if a memory location (variable, hash table entry, whatever) can actually contain values of more than one type.
17:52:09*hoverbear quit (Ping timeout: 240 seconds)
17:52:11Jehan_That happening AND requiring multiple inheritance/interfaces is pretty rare.
17:53:38Jehan_Most of the time you can use parametric polymorphism. For most of the cases where parametric polymorphism isn't good enough (e.g., abstract syntax trees), variant types or single inheritance are sufficient.
17:54:09Jehan_That's why the absence of MI in Nimrod doesn't bother me a whole lot.
17:54:14Araqplu you can always implement MI yourself with enough casts
17:54:23Araq*plus
17:54:30Jehan_Of course, if you don't have parametric polymorphism AND no MI, then you may be screwed.
17:54:55Araqyou can even hide the 'cast' in a 'converter'
17:55:24Jehan_Parametric polymorphism also handles some cases that subtype polymorphism doesn't (such as binary operators).
17:56:05Jehan_Araq: Or type adapters/views.
17:57:01AraqJehan_: I found a way to make Promise a ref without additional overhead ... or at least no obvious overheads
17:57:15Jehan_Araq: Nice!
17:57:34Jehan_I still would campaign for renaming the type. :)
17:58:27Araqthe implementation kind of sucks though .... I need a static array per thread and if it's full the thread that 'adds' to it needs to wait until the owning thread cleaned it up
17:59:00Araqthis also means that I need a condition var to signal "empty again"
17:59:18Araqand for this condvar I need a broadcast op
17:59:42Jehan_But?
17:59:44*hoverbear joined #nimrod
18:00:02Araqwell I need to lookup how the WinAPI does a broadcast
18:00:12Jehan_You can always chain normal signals.
18:00:20Jehan_That avoids the "thundering herd" problem, too.
18:00:41Jehan_http://en.wikipedia.org/wiki/Thundering_herd_problem
18:02:02Araqhmm
18:02:42Matthias247Araq: I don't know exactly what you are working on, but is there anything that speaks against implementing futures and promises the way C++(17?) does?
18:03:13flaviu1Araq: That fixes one bug, but "redefinition of '('" is unaffected. Interesting thing is that `(` works fine as a method name, but not as an enum value.
18:03:24Matthias247with the new proposals and the ability to do future.then(...) it's quite nice
18:03:57AraqMatthias247: C++ uses a shared heap, we don't
18:04:59*brson quit (Ping timeout: 265 seconds)
18:05:06Matthias247so the transmission of the value from one thread to another is the problem?
18:05:17Araqyes
18:05:41Matthias247ok, and you have to store a reference to the future in the promise (which can also be in another thread)
18:05:42Jehan_Matthias247: Also, for the historical record, it's not really a C++ invention.
18:05:50Matthias247Jehan_: I know
18:05:54Jehan_I think C# popularized most of the ideas.
18:06:36AraqMatthias247: the actual problem is that the very same thread that allocated needs to perform the dealloc
18:06:41Matthias247yes, the Task<T> things. And Dart even calls them Future<T> with the same semantics
18:08:12Matthias247hmm, you could restrict them to only work within a single thread (like Dart, JS, ...) and only allow other mechanisms for inter-thread-communication
18:08:24Jehan_Matthias247: The concept is popular for a reason, namely that it works. There are still plenty of devils to be found in the details, of course. And Araq wants more than just implement such a model, if I understand him correctly.
18:08:53Jehan_Matthias247: But that's pretty much the point of them. :)
18:09:08AraqMatthias247: that's exactly what we are doing. We use Future[T] for the async stuff and Promise[T] for the inter-thread communication
18:10:32Jehan_I'll also add that while C++ has the benefit of a shared heap, RAII-based reference counting and concurrency go together like gasoline and fire. :)
18:10:57Matthias247oh, ok. Does this then mean that Promise has a different meaning than the c++ promise (which completes a future<T>)?
18:11:18Araqflaviu1: I don't know about your redef problem. tried a .pure enum?
18:11:21dom96I don't think our Future and Promise distinction is similar to C++'s at all.
18:11:33EXetoCAraq: how much work is it to make dom.nim usable you think? perhaps the first thing should be to allow the vars in the module scope to be referenced: "document* {.importc, nodecl.}: ref TDocument ..."
18:11:57Jehan_dom96: I haven't looked at the async stuff yet at all, but I suspect that's because you use it differently from their established meaning?
18:11:57AraqMatthias247: yes, it's completely different. Better name for "Promise" is still welcome
18:12:10dom96But I should probably learn more about the semantics in C++
18:12:16Jehan_Araq: I suggested Pending[T].
18:12:30flaviu1Araq: Seems to work, but strange that enums with names like that are invalid while procs are not.
18:12:50Araqflaviu1: it's not invalid, you have redefinition problem
18:13:00Araqyou have some other '(' somewhere
18:13:16dom96EXetoC: What's unusable about the dom module?
18:13:33flaviu1Araq: You're right, my bad
18:13:54dom96C++ also has std::async
18:14:14Matthias247dom96: the future is similar. But the c++ version is currently restricted to the domain that nimrod covers, though in the end it will be a little bit more general
18:14:19EXetoCdom96: did you read the last part? that seems essential
18:14:20*kunev joined #nimrod
18:14:54Matthias247dom96: do you know this? If not, it's a good read ;) http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3784.pdf
18:15:16EXetoCthose vars can't be referenced because they aren't defined in the generated js source
18:15:20Matthias247and it goes together with this, what is similar to your Dispatcher: http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3785.pdf
18:15:29AraqJehan_: meh fine. I'll rename it to Pending
18:15:58dom96Araq: I prefer Promise
18:16:30dom96Matthias247: interesting thanks
18:16:31Jehan_Araq: Not trying to push you. I don't think Pending is a great name, but Promise is (1) CS slang and (2) the established meaning has slightly different connotations.
18:16:59Jehan_Which means that it's likely to confuse both Joe Average Programmer types and experts.
18:17:56Matthias247dom96: the continuation functionality with future.then, the unwrap, and the ability to combine multiple futures into one (future.when_any) would for sure also be helpful for nimrod
18:18:57Jehan_So, the async futures are strictly for a single-threaded case?
18:19:44dom96nope
18:20:02Jehan_Hmm, then I misunderstood what Araq said earlier.
18:20:30dom96In the future the dispatcher will hopefully try to use all your CPU cores which means multiple threads.
18:21:10Jehan_In that case, it sounds like the concepts should be unified.
18:21:24*askatasuna joined #nimrod
18:21:33Jehan_And not have one task/future/promise in one place and a somewhat different, but incompatible one, in another.
18:21:39dom96Indeed. That is what I think too.
18:23:04Matthias247using multiple threads in a dispatcher is not easy. If you then manipulate things from the "dispatcher callbacks" you have to care about synchronization.
18:23:05dom96Matthias247: Doesn't 'await' mostly remove the need for 'then', 'when_any' etc?
18:23:14Araqasync futures ARE strictly for the single-threaded case
18:23:40Matthias247dom96: no. For example you may want to start an async operation and a timeout in parallel. And wait for which completes first
18:23:45Araqdom96 simply is overly optimistic we'll get a shared memory GC
18:24:16Matthias247dom96: or you want to send data to 100 clients and wait for all to complete with a single operation
18:24:29Matthias247you can then use await to wait on the combined future
18:24:46Matthias247that's the reason why C# provides both possibilities
18:25:05Jehan_dom96: The classical use case is as follows: You have two algorithms to solve a problem, but cannot tell which one is better. So you run both in parallel, use the result of the one that finishes first and stop the other.
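That classical use case — run both algorithms in parallel, use whichever result finishes first — looks roughly like this in Python with `concurrent.futures` (a sketch only; cancelling an already-running thread is best-effort at most):

```python
from concurrent.futures import FIRST_COMPLETED, ThreadPoolExecutor, wait

def race(*algorithms):
    """Run each zero-argument algorithm in parallel; return the first result."""
    with ThreadPoolExecutor(max_workers=len(algorithms)) as pool:
        futures = [pool.submit(algo) for algo in algorithms]
        # block until at least one algorithm completes
        done, pending = wait(futures, return_when=FIRST_COMPLETED)
        for f in pending:
            f.cancel()  # best effort; a thread already running cannot be stopped
        return next(iter(done)).result()
```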
18:25:12dom96Matthias247: indeed that's a good use case.
18:26:24dom96Araq: Alright. How do I spread my async code across all cores then?
18:26:28Jehan_I've also implemented a concept of milestones, which is sort of like a barrier, except all the ways in which barriers are broken.
18:27:32Jehan_Milestones is basically an abstract value that tasks can contribute to on completion; once a threshold is reached (not necessarily numerical, milestones can be sets, for example), the execution of another task is triggered.
18:27:52Araqdom96: I don't know.
18:28:03Matthias247dom96: that's hard and works only for some use-cases. I think you would need different kind of dispatchers
18:28:31Matthias247like the C++ paper describes. One that uses only a single thread and others that can use multiple-threads (encapsulate a thread-pool)
18:28:46Araqdom96: every async 'task' can use 'parallel' statements for instance
18:30:00Jehan_dom96: The only tricky part is really sending data between threads. And that's not really all that tricky (the hard part is efficiently determining the lifetime of objects that multiple threads may be accessing).
18:30:23dom96Araq: My plan was to spawn x number of threads (where x is the number of cores) and have each thread poll the dispatcher.
18:30:48Matthias247offloading some calculations into other threads works quite well. But doing the whole logic in a bunch of threads often ends in a mess. That was my experience
18:31:34dom96That would likely lead to race conditions. I haven't really given much thought to the issues that it creates. I simply wanted to try it but that odd TLS emulation bug stopped me.
18:31:46Jehan_dom96: The first problem you have with that is where to store the dispatcher.
18:32:10Matthias247dom96: That was the strategy that I first used with boost asio. Have one io_service and poll it from multiple threads
18:32:13dom96Jehan_: In a shared global variable.
18:32:21Matthias247I really would recommend noone to do that :)
18:32:24dom96A bigger problem is the closures.
18:32:29Jehan_dom96: And how do you allocate memory?
18:32:41Jehan_shared heap, or thread-local heaps?
18:32:51dom96shared
18:33:16Jehan_In which case the lock around the shared heap's allocation function can become a bottleneck.
18:34:04dom96It may in fact be simply too complex.
18:34:15dom96brb
18:34:46Jehan_Can be circumvented by allocating equal-sized memory chunks in batches and having threads allocate from a free list.
18:35:01Jehan_There are other solutions, too, but you have to think about how you want to do that.
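The workaround Jehan_ mentions — allocating equal-sized chunks in batches so each thread serves most allocations from its own free list — can be sketched in Python (the `FreeList` class is purely illustrative; a real allocator would take the shared-heap lock only on the batch refill):

```python
class FreeList:
    """Per-thread allocation from batches of equal-sized chunks.

    A thread grabs `batch` chunks from the shared (locked) allocator at
    once, then serves subsequent allocations from its own list without
    touching the shared heap again."""

    def __init__(self, chunk_size, batch=16):
        self.chunk_size = chunk_size
        self.batch = batch
        self.free = []               # thread-local free list
        self.shared_allocations = 0  # counts trips to the "locked" allocator

    def alloc(self):
        if not self.free:
            # one locked round-trip refills a whole batch
            self.shared_allocations += 1
            self.free = [bytearray(self.chunk_size) for _ in range(self.batch)]
        return self.free.pop()

    def dealloc(self, chunk):
        self.free.append(chunk)      # returned chunks are reused locally
```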
18:36:33Matthias247hmm, unfortunately Swift didn't reveal what they do regarding concurrency. At least not in the short overviews
18:37:45AraqJehan_: please review https://gist.github.com/Araq/a37c1b27e1900bd2ca2a
18:38:24Matthias247but I guess it will be similar to objective-c. Each thread gets a mainloop with GCD and you can push closures to be executed on any of them
18:38:32Araqand I think I should start to use atomicLoad/Store ...
18:41:13Jehan_Araq: Still looking at it, but the problem with any atomic operations is portability.
18:41:31flaviu1Jehan_: No portability issues with LLVM IR :P
18:42:06Araqwhy? it's 2014. The C compiler either supports atomic ops or is irrelevant
18:42:17Jehan_flaviu1: If you want to/can generate LLVM code. I hadn't realized they had added that.
18:42:39flaviu1Jehan_: Yes, and it's very fine-grained
18:42:40Jehan_Araq: Is it part of the C standard yet? C++ yes, but C?
18:42:50Jehan_flaviu1: Define fine-grained?
18:43:23AraqC11 has it too, but it's irrelevant, every C compiler we officially support has them
18:44:31Jehan_Araq: I don't think your code is safe with respect to memory reordering.
18:44:39flaviu1Jehan_: "NotAtomic, Unordered, Monotonic, Acquire, Release, AcquireRelease, SequentiallyConsistent". I can't remember what they mean, but I remember being impressed. http://llvm.org/docs/Atomics.html
18:45:29Araq# XXX we really need to ensure no re-orderings are done
18:45:31Araq # by the C compiler here
18:45:35Araq:P
18:45:48Jehan_Not the C compiler.
18:45:59Jehan_The processor.
18:46:25Araqwell I will use atomicLoad/Store eventually which imply a fence
18:47:06Araqbut only for the b.interest location, the rest should be fine
18:47:13Jehan_It's not just b.interest, also the other memory accesses.
18:47:49Araqlike?
18:48:15Jehan_Like "inc b.entered"
18:48:34Jehan_That may not become visible to other threads for a while.
18:48:59Jehan_You're trying hard to avoid mutexes, but I'd first try to make the code correct with mutexes.
18:49:06Jehan_Then you can try to optimize them away later on.
18:49:42Araqwell ok, actually I assume stores to words are atomic
18:50:06Araqwhich is true for any sane processor in existence
18:50:16Jehan_They are atomic, but processors may reorder them with respect to other stores or loads.
18:51:20Jehan_A modern x86 processor can have a few dozen loads/stores in flight at any given time, and they're one of the better behaved types (ensuring TSO).
18:51:32Jehan_Unlike, say, ARM processors.
18:51:48Araqalso I don't just avoid mutexes, I also avoid creating a condition variable + signal
18:52:39Jehan_For example, barrierLeave may decide to signal because it hasn't seen an increment of b.entered yet.
18:53:08Jehan_You're still signaling when everything's done, right?
18:53:20Araqnot necessarily
18:53:51AraqI still don't see the problem with memory reorderings
18:53:54Jehan_The code should be exactly the same otherwise even if you put lock/unlock around all the barrier procedures.
18:55:17Jehan_Araq: For example, barrierLeave signaling because it falsely thinks that b.left == b.entered because it hasn't seen the effect of 'inc b.left' yet.
18:57:43Araqshouldn't atomicInc prevent that?
18:58:14Jehan_Araq: That may affect how b.entered is perceived (depending on what memory barrier it actually uses and when).
18:58:50Jehan_Eh, I've switched entered and left around there.
18:59:12Araqum
18:59:22Jehan_But there are zero guarantees about when other cores will see the effect of "inc b.entered".
19:00:06Jehan_inc b.entered will write the value to the L1 cache. When and how it will be propagated to L3 or main memory is something that you can't tell.
19:01:02Jehan_The point of surrounding critical regions with lock/unlock is twofold: (1) mutual exclusion and (2) making sure that any changes you made within the critical region will be seen by other cores.
19:02:10Araqmeh I'd rather insert fence instructions
19:02:44Jehan_Araq: That has premature optimization written all over it.
19:03:41Jehan_There are thousands of ways you can shoot yourself in the foot working without locks.
19:04:00Jehan_Unless there's an established need, I'd recommend avoiding it.
19:05:00Jehan_Monitors (i.e., mutexes + condition variables) work. They have strong guarantees, and they've been tested to a fare-thee-well.
19:05:33Jehan_They may occasionally be inefficient, in which case you carefully and selectively replace them with a lockfree implementation.
19:07:21*flaviu1 quit (Remote host closed the connection)
19:07:22Jehan_But there's nothing cool or inherently superior to working without locks. More often than not, it just leads to buggy code.
19:08:14AraqI'll consider it
19:08:28Jehan_I recommend reading "Double-Checked Locking is Broken" (http://www.cs.umd.edu/~pugh/java/memoryModel/DoubleCheckedLocking.html).
19:08:45Araqyeah I know about that
19:08:53Jehan_Tons of people think they can make it work, but very few understand how.
19:09:21Araqand yet who really had a bug with it
19:20:44*brson joined #nimrod
19:22:06AraqJehan_: why would I even use 2 counters when I use a lock for it anyway?
19:22:55Jehan_Araq: Depending on how you implement it, you may be able to get away with one counter.
19:23:14Jehan_Araq: Two-counter implementations may be useful for reusing the data structure, though.
19:24:05dom96back
19:24:06Jehan_Because their "done" state is different from the "just started" state.
19:25:02dom96In regards to parallelising async, if we want to beat Go on benchmarks we need to do it.
19:28:03Araqdom96: we can come up with other benchmarks though
19:28:48Matthias247dom96: go will also only perform well when the application is parallelized (split into many goroutines)
19:29:08Matthias247that's probably easy for servers (one goroutine for each client), but still hard for anything else
19:31:06AraqJehan_: the check in closeBarrier is often true and then we can avoid quite a bit of overhead. it's "premature optimization" because I didn't measure it, but then it's a stdlib implementation and you never know what it will be used for
19:31:41*flaviu1 joined #nimrod
19:32:07Jehan_Araq: A fairly safe way of doing this without locks is to store all the data in a single word that can be updated with a single CAS operation. That does limit you to something like a max value of 2^15 for the counters on 32-bit systems, though.
19:32:30dom96Matthias247: It seems that is good enough to impress a lot of people :P
19:33:36Matthias247dom96: node also impresses people - and I have no clue why ;)
19:33:44Jehan_I'd still go with a simple and robust implementation first. "Make it work, make it right, make it fast."
19:35:21Araqthe problem is: It is robust in practice no matter what I do :P
19:35:57Araqit already works, so I'm after "make it fast"
19:36:17Jehan_In practice meaning "on x86 processors"?
19:36:54Araqin practice means on x86 for toy programs
19:37:52Jehan_The thing is that something working on x86 or SPARC processors may give you an impression of reliability (because TSO can screw you over in only very limited ways) that does not apply to other hardware.
19:38:37Jehan_In any event, how big is the maximum number of jobs that you can have participating in your barrier?
19:39:17Jehan_I.e. how high can the counters go at most?
19:39:57Araq2 billion on a 32bit arch or something
19:40:11Araqit's hard to tell
19:40:18Jehan_On 64-bit?
19:41:53Araqwell usually you use it with an array, so you index that, so its upper bound is high(int)
19:42:22Jehan_Gotcha.
19:42:37Jehan_I was wondering if there was a practical limit imposed by other constraints.
19:44:36Jehan_The check in closeBarrier that you say is often true is "if b.left != b.entered"?
19:45:06Araqwell it's false, b.left == b.entered, aka "all threads have finished"
19:45:21Jehan_Yeah, that's what I meant.
19:45:44*io2 quit (Quit: ...take irc away, what are you? genius, billionaire, playboy, philanthropist)
19:46:05Jehan_But the overhead that you can save here is really minuscule compared to everything else that went into the rest.
19:46:38Araqusually you can sink another statement into the 'parallel' section and then 'b.left == b.entered' becomes even more likely
19:46:41Jehan_You may save a hundred clock cycles, but there are probably thousands involved in just waking up worker threads.
19:47:31Araqwell yes. but this "wakeup" thing can be optimized too
19:48:31Jehan_In what way?
19:50:03Araqby some medium amount of busy waiting
19:50:36Jehan_That'll still be more expensive. :)
19:51:20Jehan_The problem with the efficiency of the parallel section is that cores will be sitting idle (or doing busy waiting) at the beginning when everything is being spun up and at the end when everything is being shutdown again.
19:52:15Jehan_For a parfor statement to be reasonably efficient, each job needs to do a non-negligible amount of work so that the wasted time at the beginning and end don't count for much.
19:52:43Jehan_At that point, a bit of synchronization isn't going to hurt you, either.
19:55:24Jehan_This is also why everybody is suddenly so keen on implementing tasks/futures/whatevertheycallit + workstealing (plus whatever language features are needed to make it usable). Because it beats the hell out of any other known approach to maximum CPU utilization.
20:04:11Araq*shrug* parfor is still the basis for GPU programming which beats the hell out of CPU programming; IF it can be used, of course
20:17:03Araqseriously ... inserting memory fences is seductive. what can possibly go wrong then?
20:17:52*brson quit (Remote host closed the connection)
20:19:17*kunev quit (Quit: leaving)
20:19:19*brson joined #nimrod
20:22:38fowli couldnt find a timer lib
20:23:09fowli had to write my own, like an animal :(
20:23:33Araqthere is stuff for it in system/, fowl
20:25:07fowlAraq, my hands are bleeding from writing 20 lines tho D:
20:30:48*Matthias247 quit (Read error: Connection reset by peer)
20:42:03fowlcan you check my prs
20:42:18fowl1243 and 1174
20:42:37Araqsorry I'm busy
20:42:53fowlwhat am i paying you for?
20:43:10fowldance!
20:43:23fowlbrb
20:49:18Jehan_<grumpyoldman>I need a lawn so that I can properly tell kids to get off it.</grumpyoldman>
20:50:00flaviu1Jehan_: Why would you use XML when you could use `:`?
20:50:08AraqJehan_: gah, your suggestion of keeping the cond vars in a list is complex
20:50:43Jehan_I made a suggestion to keep cond vars in a list?
20:50:54*Jehan_ must really be getting old. Can't remember that.
20:51:02Araqin fact it's brain damaging
20:51:03Jehan_flaviu1: You lost me?
20:51:22flaviu1grumpyoldman: I need a lawn so that I can properly tell kids to get off it.
20:51:29AraqJehan_: yes, we talked about broadcasts
20:51:42flaviu1You save ~18 characters
20:52:00Jehan_Araq: Hmm, I may have been less than clear.
20:52:29Araqin fact ... with that solution I need to ensure the node is never ever added twice to the list
20:52:30Jehan_Araq: The point was not to keep cond vars in a list. Just have one. When a thread receives a signal, it processes it and signals again.
20:52:42Araqouch
20:52:49Araqthat's way easier ...
20:53:01Jehan_Sorry. :(
20:53:26Jehan_flaviu1: You win. Have a cookie. :)
20:54:31Jehan_flaviu1: http://www.wunderkessel.de/galerie/showphoto.php?photo=13445
20:54:48flaviu1Haha, looks delicious
20:55:22flaviu1Araq: Is there a reason that accent quoted things aren't just one token?
20:55:41flaviu1If there isn't, I'd like to make them one token
20:55:58Araqyeah. let that be.
20:56:28Araq`foo templParam` can be used for identifier construction
20:56:50Araqlike C's ## preprocessor operator
20:57:03flaviu1ok, thanks, I forgot about that
20:57:05Jehan_Yes, I saw that a while ago and thought that it was a very nifty way to go about it.
20:58:03*ehaliewicz joined #nimrod
21:00:13AraqJehan_: well it's not easier either way
21:00:35Jehan_Araq: Difficult to tell without the context.
21:00:56Araqthe context is: I have a fixed size array, threads append to it
21:01:12Araqwhen it's full threads have to wait until it is empty again
21:02:21Araqnow the thread that gets woken up can wake up another thread IF there is still space left in the array
21:03:15Araqbut then there is no way to tell if another thread is even waiting for that event
21:03:51Araqdoesn't matter, I guess
21:03:59Jehan_Have a counter for the number of waiting threads? But in the worst case, the signal just gets discarded.
21:04:27Jehan_Can this deadlock, by the way, if the array gets full and no thread is running to empty it?
21:05:14Araqyup, but only if the owning thread crashed and then you have other problems
21:06:16Jehan_Yeah. :)
21:09:40Araqhmm in general polling is still much easier than condition variables
21:11:34*Raynes quit (Ping timeout: 240 seconds)
21:12:02Jehan_Yes, but they're generally for different use cases.
21:12:12Jehan_Polling requires that you have a free processor.
21:12:34Jehan_And that no other threads may push the polling thread off it.
21:12:34*eximiusw1staken quit (Ping timeout: 240 seconds)
21:13:32*eximiuswastaken joined #nimrod
21:14:12*Raynes joined #nimrod
21:14:26*Raynes quit (Changing host)
21:14:26*Raynes joined #nimrod
21:14:27Araqwell i can easily sleep(10) and then sleep more -> CPU is free to run another thread -> thread pool notices not all cores are busy and creates another thread
21:15:33Araqbut I have no idea how that compares wrt efficiency to the "proper way" of using condition variables
21:16:32Jehan_Condition variables have their own issues, by the way. Such as that the POSIX standard technically does not guarantee any kind of fairness.
21:17:27Jehan_I don't know of any OS stupid enough to actually exploit that, but you never know.
21:18:36AraqI never think about fairness, it's already hard enough to create solutions that work
21:19:10Jehan_Well … fairness is part of any solution that works. :)
21:19:35AraqI knew you would say that :P
21:20:00Jehan_:)
21:20:32Araqbut again, where are all the stories about "oh, our Java program crashed and it was due to incorrect double-checked locking"?
21:20:38flaviu1Araq: Write a macro for Java syntax, then copy-paste code. The java guys have really nice concurrency.
21:20:52flaviu1Libraries
21:21:52Araqit's like the very common overflow bug in binary search algorithms
21:24:13Araqflaviu1: where is the fun in that?
21:24:28Jehan_Araq: Most Java programs don't particularly fine-tune their concurrency.
21:24:42Jehan_They just use basic monitor semantics for the most part.
21:24:57Jehan_The JVM implementors do pull a few tricks, on the other hand.
21:25:16Jehan_Google "biased locking", for example.
21:25:16AraqI've seen lots of double-checked locking in C#
21:25:55Jehan_Araq: It doesn't blow up in C# because C# pretty much runs on x86 only and double-checked locking is perfectly safe on a TSO architecture.
21:26:19*Mat3 joined #nimrod
21:26:22Mat3hi all
21:26:34Jehan_If you're only interested in x86, then you can almost make as many assumptions as though you were using Pth. :)
21:26:42Araqsure but my point is: before it fails because of that, it fails earlier because of other threading bugs
21:26:55Araqor because of OOM
21:27:15Araqvery common in heavily GC'ed languages in fact
21:27:17Jehan_Possibly. Someone who can't get double-checked locking right will probably screw up a lot more than that.
21:27:43Jehan_And you can do double-checked locking correctly if you know how to use memory barriers.
21:28:17Jehan_I've written a hell of a lot of code that avoids locks and condition variables, and I still fall back to them whenever I can.
21:28:31Jehan_Simply because the headaches of doing it correctly without are rarely worth it.
21:28:44Jehan_And most of the time, you don't gain any speed, anyway.
21:28:47Mat3hmm, interesting
21:28:57Jehan_Double-checked locking, ironically, is one of the few examples where it's actually worth it.
21:29:25Jehan_But in the end, the overhead of synchronization is primarily because any synchronization primitive has to bypass caches and write to main memory.
21:29:48Jehan_But that's also true for anything where one thread needs to communicate with a thread on another core.
21:30:02Jehan_I.e. you need to push the data to main memory, anyway, and pay the price.
21:30:34Jehan_Double-checked locking is the odd exception because once you're past initialization, it's essentially constant on all cores and doesn't get modified any further.
21:31:10Araqmost recent Intel CPUs support direct inter-CPU communication bypassing main memory, i think
21:31:12Jehan_Memory barriers can also be pretty expensive.
21:31:50Jehan_Memory barriers are basically used to exploit the fact that most are no-ops on x86 and still guarantee correctness on other processors.
21:32:23Jehan_Araq: In a way, but that's also not cheap.
21:32:53Jehan_And where they can, they can also make locks cheap in the same fashion.
21:34:41Jehan_The basic operation underlying a lock is basically a read-modify-write that's guaranteed to be atomic and seen by all processors.
21:35:12VarriountMeep
21:35:39Varriountdom96: I read the logs at work. Looks like the asyncio api is going to need some changes in order to make it threadable.
21:39:03Jehan_Varriount: The one thing I'd like to have for this and similar features are multiple GCed shared heaps.
21:39:10*askatasuna quit (Ping timeout: 276 seconds)
21:39:20Jehan_Basically, thread-local heaps without actual threads controlling them.
21:39:51dom96Varriount: Perhaps. But I have no idea what those changes are.
21:40:02Araqthat's quite easy to expose, Jehan_ but also incredibly dangerous
21:40:16Mat3Araq: COMA systems should do that I think
21:40:16Jehan_Araq: Yeah, I know, I did look at it.
21:40:22VarriountJehan_: Do you think simply emulating the windows io notification queue could work for multithreaded IO?
21:40:30Jehan_And I agree with the dangerous part.
21:40:44Jehan_Varriount: I don't know much about Windows.
21:40:59VarriountJehan_: Do you know what a proactor is?
21:41:18Jehan_Varriount: Yup.
21:41:33VarriountJehan_: It's that.
21:42:00Jehan_Varriount: Still doesn't tell me enough.
21:42:31Jehan_It does depend on the OS kernel what's efficient and what isn't.
21:42:38VarriountJehan_: On windows, you can make a call which returns immediately, then poll a notification queue for a completion event.
21:42:38*hoverbea_ joined #nimrod
21:42:48Varriount*an IO call
21:43:00Jehan_Varriount: Yeah.
21:43:18Jehan_The question is, is it efficient?
21:43:30VarriountJehan_: In this case, yes.
21:43:51Jehan_Oh, wait, you want to emulate it for asyncio?
21:43:56Jehan_Rather than using it?
21:44:04Jehan_I think I misread your original question.
21:44:09VarriountJehan_: If multiple threads wait on the same queue, the OS actively selects a handful of threads to wake up and send notifications to.
21:44:58VarriountJehan_: I'm wondering if that sort of model could be used with Nimrod's threading model to implement efficient asyncio
21:45:58*hoverbear quit (Ping timeout: 240 seconds)
21:46:07AraqVarriount: IMO that's the wrong question to ask. If you're IO bound, you're not CPU bound by definition
21:46:46Jehan_Varriount: I don't think there's an easy answer. As far as I know, libevent is a non-trivial piece of code.
21:46:54VarriountAraq: Then what's the right question?
21:47:53VarriountIt's also a matter of avoiding the 'thundering herd problem' with threading and IO
21:47:57Jehan_I'd hook into libevent rather than reinventing it if I wanted to do fast multi-threaded I/O.
21:48:19AraqVarriount: why is Varriount still not happy with the existing async IO?
21:48:56VarriountBecause Varriount wants scalable IO, like it says on the tin. *looks at dom96*
21:49:20AraqJehan_: I've heard that a lot. we created our own for lots of reasons
21:49:38VarriountIf you have multiple threads that all use a posix-like poll() on a single socket, you get a bunch of thread contention as they fight for resources.
21:50:11VarriountIf you use only a single thread, you are limited in how well you can scale.
21:50:11Jehan_Araq: There are plenty good reasons to create your own (such as not having a gazillion external dependencies).
21:51:03VarriountAraq: See http://msdn.microsoft.com/en-us/library/windows/desktop/aa365198(v=vs.85).aspx
21:51:14AraqVarriount: only MongoDB and node.js are webscale anyway
21:51:25Araqand /dev/null of course
21:51:40Varriount-_-
21:51:43Jehan_Varriount: If you want scalable rather than "maximum speed attainable", then the question becomes easier.
21:51:43flaviu1Araq: Only if its over the internet, as a service
21:52:02VarriountAraq: And neither of those are written in Nimrod, I notice.
21:52:04EXetoCthat never gets old
21:52:45flaviu1EXetoC: http://devnull-as-a-service.com/
21:52:50Mat3does Nimrod support actors ?
21:53:03flaviu1Yes, apparently they're slow though
21:53:05*Changaco quit (Ping timeout: 264 seconds)
21:55:18Mat3good to know, thanks
21:55:42*askatasuna joined #nimrod
21:56:57dom96Araq: Async gets rid of the IO bottleneck so CPU only remains.
21:57:42dom96Most benchmarks test how fast you can accept connections and serve their requests, and this becomes CPU bound quickly.
21:58:04dom96Because you always have a client connecting to your server socket.
22:03:07Araqhow the times change. I thought web apps always wait for a database query or a connection. Nowadays they are all CPU bound...
22:03:43Jehan_Well, we're talking about benchmarks. :)
22:03:54Jehan_Which may or may not have a connection to actual practice. :)
22:04:13Jehan_According to most benchmarks, Ruby on Rails is completely unusable.
22:05:34Araqdom96: can you please tell me why using 4-8 processes on a 4 core CPU is not as efficient for these benchmarks?
22:05:35dom96Yeah, you're right. It's silly to optimise for these sorts of benchmarks. But then what do we optimise for?
22:05:57dom96Being faster than Go differentiates us in some way at least.
22:06:09dom96But I guess I can forget about achieving that.
22:06:48dom96Araq: Like I said multiple times already, I cannot spawn the processes dynamically for the framework benchmarks.
22:07:02dom96I have to hard code a number in.
22:07:16*filwit joined #nimrod
22:07:18Araqeh ... what?
22:07:27filwithey Jehan_, you around?
22:07:50filwiti read logs, and saw your response to my earlier gist
22:07:59Varriountdom96: Could you explain, if not for Araq, then for my sake?
22:08:17dom96Varriount: Araq: Say I have 8 processes, each needs to listen on a different scgi port. Nginx needs these ports to be hard coded into its config file.
22:08:40Jehan_dom96: I wasn't criticizing you for optimizing for benchmarks. It's silly that the game is played this way, but there's little you can do about it.
22:08:45dom96Now if the CPU has only 4 cores and I spawn 4 processes then the config file needs to be edited.
22:09:23Jehan_dom96: I am still amused by the "benchmarks über alles" attitude that ignores a lot of other important concerns.
22:09:44Jehan_filwit: Yeah, intermittently afk, but I'm here.
22:09:55Araqdom96: we can hack around that limitation easily
22:09:57dom96Right now because I have rewritten the http server we can stop using nginx.
22:10:03dom96And we should do that
22:10:17dom96But that prevents your approach too
22:10:38dom96Unless I write a load balancer
22:11:00Araqin any case, sending an incoming socket to a thread is hardly a problem, even with nimrod's thread local GCs
22:11:09dom96which will do what nginx does with scgi...
22:11:28dom96This approach is simply not elegant.
22:11:48filwitJehan_: oh good, you're here. The problem I see with your gist is that it's (as i was expecting) fundamentally different from mine, in that I use a reference of an interface type to point to "higher order" behavior dynamically. I guess "design differently" does apply to most cases, true.
22:11:50dom96If you want to sell "Yeah, we support scaling. Just spawn multiple processes" then by all means go ahead.
22:12:00dom96But I am not convinced that people will buy that.
22:12:43Jehan_filwit: Yes. The point I was trying to get at is that dynamic dispatch is unneeded most of the time.
22:13:10VarriountIt all boils down to a problem of architecture: How do we present a usable, scalable, and asynchronous approach to IO?
22:13:25filwitJehan_: I don't think i illustrated my point as well as I should have by using single pointers (a & v in my gist) to illustrate instead of `seq[IAction]`, etc
22:13:36flaviu1Varriount: Pick 2 of 3?
22:13:56filwitJehan_: i understand that often dynamic dispatch can be avoided, and should be when it's possible
22:14:00Jehan_filwit: And that it's extremely rare to need to dispatch based on two different hierarchies.
22:14:24*ehaliewicz quit (Read error: Connection reset by peer)
22:14:35Jehan_filwit: My claim was never that MI is useless, but that it's needed very rarely and obviated mostly by other features that Nimrod has.
22:15:12Mat3Varriount: Actors ?
22:16:14filwitJehan_: well, that's kinda why i was asking you for an example of a design pattern that would match. For instance, imagine you have a GUI system. Button and Slider are both `GUI` objects, and can be added to the same seq[GUI] list; however, they might also need to be derived from different things (Visual, for example, where Visual adds functionality, not just enforces a protocol).
22:16:24*ehaliewicz joined #nimrod
22:16:34filwitJehan_: in these situation, is there a better approach than MI?
22:17:35Araqah here we go, UIs need OO
22:17:35Jehan_filwit: heterogeneous containers (such as ASTs or seq[AbstractType]) are the classical example of where you need dynamic dispatch, but once you get to looking at examples where you actually need them, they crop up pretty rarely.
22:18:02*hoverbea_ quit ()
22:18:03filwitJehan_: I was thinking that even in my `trait` design, I would attempt to lower to this sort of generic (non-dynamic) behavior if possible (through use of 'prov' vs 'method'), but I can't imagine a better (more straightforward) approach to tackle dynamic dispatch than MI (except procvar lists, but those have their own issues)
22:18:56fowlcomponents will save you from your oo nightmares
22:19:34Jehan_filwit: The question is whether you even want/need dynamic dispatch most of the time.
22:19:42fowlin fact, you use modules, your code is component oriented, you just have to apply that to your widgets/objects
22:19:44filwitfowl, yes, but they also require the programmer to write maintenance code... which is sorta what my Trait thing is, actually
22:19:59fowl:rolleyes:
22:20:36Varriountfowl: This isn't github - Most clients don't have that kind of smiley parsing.
22:20:50Jehan_filwit: As a workaround, you can always use views + Nimrod methods to put objects in more than one type hierarchy.
22:21:10fowl:poop in a bucket:
22:21:17filwitJehan_: yes, but I gave a specific example of where dynamic dispatch is useful (seq[GUI]), and I was wondering what better design pattern could accomplish it (short of a component, procvar list, manager)
22:21:40Jehan_As I said, ideally you do want MI. It's just that I don't think it's a killer feature as long as you have parametric polymorphism and some sort of variant types or single inheritance.
22:22:04filwitfowl, i'm not saying it's super hard or something that engine designer's shouldn't do... only that it's not a great solution for those looking to quickly hack together functional code.
22:22:16*gsingh93_ joined #nimrod
22:22:26Araqfilwit: not saying that it is a better solution, but keeping things separate works too
22:22:30Jehan_filwit: You don't need to convince me that it's desirable (I've written too much Eiffel code to think otherwise). Just that it's not all THAT valuable.
22:22:50Araqso you don't even have the seq[GUI] in the first place
22:22:58filwitJehan_: okay. That sounds reasonable. I just wanted to know if you had a better solution I wasn't aware of for this. Thanks for the clarification.
22:23:52fowlif wordcount was nickels you would be BALLIN filwit
22:24:09fowli cant argue with paragraphs
22:24:22filwitAraq: yes, I agree that for many things this is a better design pattern (see fowl's comments about component systems), but it requires design on top of the structures you're trying to write (so bad for quickly hacking things together... at least for some).
22:24:36Jehan_filwit: You can also use views to emulate multiple inheritance. They have the benefit that they work after the fact, too.
22:25:04filwitfowl: you can quote sections of my statements and respond accordingly
22:25:13Araqfilwit: I'd go for the "immediate mode UI" anyway
22:25:19fowlgui isnt hard, i write a specialized version for every project i do
22:25:27*Varriount quit (Quit: Leaving)
22:25:49Jehan_I.e. basically the adapter pattern integrated with the type system.
22:25:50fowldisplay some info, handle clicks, anything else can be added later
22:27:25filwitAraq, fowl: this also extends beyond GUIs (in fact I probably wouldn't use MI for a GUI anyway...). However, it's still a commonly understood design pattern that many are familiar with and is pretty much the "cheapest" (in terms of code required) way to construct sane software.
22:27:26EXetoCAraq: you weren't convinced by the counter arguments provided in this channel before?
22:27:37EXetoCI don't know anything about GUI approaches btw
22:28:19filwitAraq, fowl: at the end of the day, i agree with Jehan_ of course, that most of the time (with generic code) MI is not needed really.
22:28:26Jehan_filwit: I think what you're seeing here is a difference in programming paradigm.
22:28:50AraqEXetoC: I've used both. For games I'd always use the immediate mode UI. it's a pleasure to debug.
22:29:19Jehan_An OCaml programmer, for example, would likely be mystified by the desire for multiple inheritance (and even single inheritance for the most part).
22:29:29*Varriount joined #nimrod
22:29:38*brson quit (Ping timeout: 255 seconds)
22:30:31Jehan_OCaml does have objects and classes, but they see very little use.
22:30:59filwitJehan_: my intentions with most of the "points" I raise here are about Nimrod's adoption, and my ability to advertise it. OCaml devs may have fine "workarounds", but that language is far less popular than Java/C++/C#, and all of those support a common design pattern Nimrod does not.
22:31:15Jehan_filwit: I understand that.
22:31:54Jehan_filwit: But Nimrod is in many respects different from Java/C++/C#, often intentionally so. I'm not sure how much you can hide that.
22:31:56flaviu1I like Scala's multiple inheritance
22:32:56Jehan_flaviu1: The only real issue with MI is to implement it efficiently.
22:33:18flaviu1Yeah, Scala's version had an interface call for each trait you stacked on
22:33:19Jehan_Lots of people got scared of it because C++ managed to screw it up.
22:33:24filwitJehan_: actually, that's what makes Nimrod so great: yes, its core solutions are different (often more "ground level") than the big-3's, but its metaprogramming allows us to support all of their paradigms as well, with very similar syntax.
22:34:10*saml_ joined #nimrod
22:34:11Jehan_filwit: I'm not sure that implementing a different language on top of Nimrod is the best idea. It reminds me of many of the problems LISP always had when that happened. :)
22:34:43Araqhi saml_ welcome
22:34:51filwitJehan_: well it's not so much "different language" but "look, you want OOP? Just `import oop`"
22:35:03saml_hi Araq . i've been waiting for you
22:35:04Jehan_Part of the problem with understanding LISP code is that you often have to understand several different sublanguages, few of them documented.
22:35:19Jehan_filwit: Yeah, but new syntax, new programming model and conventions ...
22:35:45fowlits not new syntax
22:36:31fowlnvm im not paying attention to what y'all are saying
22:36:34Jehan_Having methods with an implicit this parameter qualifies, I think.
22:36:36filwitJehan_: yeah, I would agree somewhat (even though I think standards *should* be up to third parties as much as possible), but in this case OOP is such a commonly understood paradigm it makes sense.
22:36:38dom96good night
22:36:58Mat3Jehan_: Can you please be more precise, which problems do you see with ML on top of Lisp?
22:37:25Jehan_Mat3: Huh? I wasn't talking about ML on top of LISP.
22:37:37Mat3ML = Meta Language
22:38:16Mat3or application specific language
22:38:16Jehan_Mat3: The problem is that these programs are a pain to understand and maintain.
22:38:34filwitbrb
22:38:37Jehan_For relatively little benefit in general.
22:38:49Mat3doesn't this depend on the language implemented ?
22:39:27Mat3I mean one can implement all kinds of languages in Lisp (even Scheme)
22:39:31Jehan_Mat3: Yes, but try dealing with half a dozen slightly different LISP dialects in the same program.
22:39:54flaviu1Basically why java is so successful
22:40:16flaviu1The cost of doing clever stuff like that is so high no one does it
22:40:19Jehan_There's a famous bon mot that every language eventually implements a LISP engine.
22:40:20AraqI've heard that a lot. IME the average Java program is so far worse it's not funny
22:40:24Mat3just study their declarations, I really see no problem in doing that
22:40:29Jehan_A little known corollary is that this applies to LISP, too. :)
22:41:04Jehan_Mat3: Once you're dealing with programs north of a few hundred KLOC, it's not that easy anymore.
22:41:33Araqflaviu1: most Java code I've seen is unmaintainable.
22:41:51flaviu1Araq: I've had much better experiences
22:42:22flaviu1I haven't done anything with heavy reflection though
22:42:24Jehan_Araq: The problem with Java is the opposite, i.e. that the language was so feature poor initially that all kinds of workarounds via design patterns became common practice.
22:43:02flaviu1Enums as singletons, haha
22:43:10Jehan_Overuse of design patterns for trivial stuff was a general fad, too.
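flaviu1's "enums as singletons" refers to a Java idiom (a one-constant enum as a thread-safe, serialization-proof singleton). As a sketch in Python rather than the original Java, Enum members behave the same way: every lookup, by name or by value, yields the one shared instance.

```python
from enum import Enum

# One-member enum as a singleton: the runtime guarantees there is
# exactly one instance of each member, however you look it up.

class Registry(Enum):
    INSTANCE = "the only one"

a = Registry.INSTANCE            # direct attribute access
b = Registry("the only one")     # lookup by value
c = Registry["INSTANCE"]         # lookup by name
print(a is b is c)               # True: all three are the same object
```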
22:43:10Araqif you have more than a few hundred KLOC, you've done it wrong, Jehan_ ;-)
22:43:31Jehan_Araq: I wish. Some programs simply are that big.
22:43:55flaviu1And then Scala came around, and programs in scala were half the size of Java, but they sometimes overused operators.
22:44:03Jehan_One of my least favorite jobs involved a C++ codebase with 1.5 million LOC (in the early/mid-aughts).
22:44:08flaviu1Also, allocated like crazy and were sometimes very slow
22:44:55*hoverbear joined #nimrod
22:45:02Jehan_While that codebase had its problems, there really wasn't much you could have done to slim it down.
22:45:49flaviu1Araq: It seems that I can't have `#` in my backticks, but I guess that's a small price to pay for the added implementation ease
22:46:59Mat3Jehan_: That argument applies to every language I think, anyhow I see your point (and think the only good solution for it is simply structured programming in general)
22:48:03Jehan_Mat3: I wasn't arguing against languages. I was arguing against the temptation to build new and shiny DSLs on top of languages just because they allow it.
22:48:37Jehan_With great power comes great responsibility and all that. :)
22:48:55Jehan_I *like* metaprogramming capabilities.
22:49:16Jehan_But they can also be dangerous for software maintenance if people go overboard with them.
22:49:26OrionPKhola
22:49:44Araqgood DSL design simply has to be learned like good API design
22:49:47flaviu1Jehan_: I agree with you, but how can programming languages allow flexibility while making going overboard on the flexibility difficult?
22:50:10AraqI can't see yet that it's inherently harder to do than good API design
22:50:12Jehan_flaviu1: They can't. I was preaching self-restraint.
22:50:19Varriountflaviu1: By using social taboos
22:50:45Jehan_Araq: You haven't seen what custom defmacro variants can do. :)
22:50:45flaviu1That works, but a special few will feel they are too special to abide by the taboos
22:50:46VarriountEg, shun those who misuse powerful constructs
22:50:57Araqand nobody argues we shouldn't use APIs in large software systems
22:51:03*hoverbea_ joined #nimrod
22:51:36flaviu1Varriount: In free time programming, you have that choice, not always
22:51:43Araqpeople go overboard with it *because* they couldn't do it all before
22:52:12Jehan_Araq: Eh, the seasoned LISP programmers are often the worst. :)
22:53:10Mat3Jehan_: That's the philosophy behind Lisp (this and factoring - a strategy for avoiding the problems you mentioned)
22:54:15*hoverbear quit (Ping timeout: 260 seconds)
22:54:24Mat3Lisp programs are often applicative languages which implicitly solve a range of related problems
22:55:14Mat3(or generators for solving them)
22:57:42Mat3like different mathematical notations are useful for abstraction purposes
22:59:29Mat3get some sleep, ciao
22:59:33*Mat3 quit (Quit: Verlassend)
23:01:04Jehan_Sleep sounds like a good idea. See you around. :)
23:01:08*Jehan_ quit (Quit: Leaving)
23:01:57Araqsame here, bye
23:02:05VarriountGoodnight
23:05:26dom96hrm, looks like we got mentioned on r/programmingcirclejerk
23:05:39Varriount?
23:05:59VarriountI'm assuming, due to the subreddit, that it wasn't anything good?
23:06:04dom96not really
23:06:08dom96but meh
23:06:18dom96even bad PR is good PR :P
23:06:20dom96good night for reals
23:06:45flaviu1But they're making fun of wikipedia more than nimrod
23:08:29flaviu1[On swift] "I liked it before all the plebs started talking about it. Now I'm considering switching to nimrod. "
23:09:41VarriountOuch.
23:10:09Varriountflaviu1: I just hide anything that mentions swift in my reddit client (I browse reddit mainly through an android app)
23:11:47flaviu1Varriount: Great idea, RES does that too
23:13:40Varriountdom96: I'm experiencing closure-ception. I have a closure which calls a procedure with itself to schedule itself somewhere else.
23:19:08flaviu1dom96: You should see how much they're bashing on Go
23:20:28*xtagon quit (Excess Flood)
23:21:41*xtagon joined #nimrod
23:28:30*hoverbea_ quit ()
23:35:15filwitback, read the reddit thing "You definately should. It has whole program dead code elimination. How anyone could code before this is beyond me."
23:35:41filwitam i missing something here?
23:36:01filwiti thought the jokes were supposed to be clever
23:37:10filwitbut picking one of the few more 'minor' benefits of Nimrod and acting like that's it isn't clever at all... maybe i'm missing the point of this subreddit tho
23:38:00VarriountI guess the humor is in pointing out a seemingly minor or worthless feature that nonetheless is touted as a clever feature?
23:38:03filwitoh damn, Araq left before i could question him about his GC design..
23:39:09filwitVarriount: i guess.. though it's not even a hugely brought up thing
23:39:16*xenagi joined #nimrod
23:39:41filwitguess it's listed on the front page, so some people (who are looking for things to complain about) would take it wrong?
23:41:05filwiteither way, pretending Nimrod *doesn't* have excellent performance and an excellent GC (and assuming that's not relevant) is just a silly joke.
23:41:43filwitIf he wanted to make a better one, he should have mentioned the need for forward-declaration or something, idk..
23:41:56VarriountI mean, if I were to point out nimrod's biggest flaw, it would either be a fact we couldn't directly control (like public mindshare) or some architectural choice (like unconventional threading model)
23:42:12filwiti would respond but I feel like I would just come off as "that guy" in the wrong subreddit..
23:42:23VarriountEven then, it's easier to poke fun at Java
23:42:29filwityeah
23:43:20filwitmeh, at least Nimrod is big enough to where 8 people on some random subreddit knew it enough to get the joke.
23:43:43VarriountEven if it is a rather humorless one.
23:44:09Varriount(By the way, can humor also be spelled "humour"?)
23:45:29filwit(dunno, I only speak americanrish)
23:49:18*Demos joined #nimrod
23:49:30*darkf joined #nimrod