00:00:47 | * | xenagi joined #nimrod |
00:02:48 | * | Kazimuth quit (Remote host closed the connection) |
00:05:18 | * | filwit quit (Ping timeout: 245 seconds) |
00:10:03 | * | caioariede quit (Ping timeout: 265 seconds) |
00:18:09 | * | filwit joined #nimrod |
00:18:58 | * | q66 quit (Quit: Leaving) |
00:40:38 | filwit | Jehan_: hey, you around? Wanted to talk about OOP stuff now. Concerning multi-interfaces vs parametric polymorphism, I wrote a simple example of OOP Nimrod code (using some pseudo macros I'm currently designing) which is very trivial. gist: https://gist.github.com/PhilipWitte/b1316b22d69b47bcf593 |
00:40:47 | filwit | Jehan_: It's just one class, two interfaces (traits). Note that I'm using 'method' to denote dynamic dispatch, but these procedures will actually convert to custom dispatch trees (not Nimrod's built-in method dispatch, which only supports single inheritance). My question to you, is what design practice you consider better practice than this, and why? |
00:40:54 | filwit | Jehan_: I could pass `p` to a generic proc which duck-types it against IAction/IVisual and returns a structure with (which is, I assume, akin to what you mean by 'parametric polymorphism')... but that's (one way to achieve) pretty much exactly what is going on here. Except that I'm building it into the trait object, which has the benefits of enforcing behavior at class declaration, isolating potential references the class can be used from (helpful for sanity on large codebases with many devs), and potentially optimizing the code (because dispatch trees are theoretically inlineable and use less memory than procvar lists).
00:41:46 | filwit | Jehan_: so i'm interested to hear your thoughts on better design approaches to this simple example. |
00:41:58 | flaviu1 | filwit: He's sleeping |
00:42:46 | filwit | crap, just realized he's not even here... |
00:42:55 | filwit | i wrote all that out for nothing, lol |
00:42:58 | flaviu1 | Maybe he'll check the logs |
00:43:04 | filwit | yeah |
00:43:08 | flaviu1 | Just gist it and send him a link when he appears |
00:43:18 | filwit | yeah, good idea |
00:44:18 | * | caioariede joined #nimrod |
00:47:36 | Varriount | filwit: You could use memoserve |
00:48:01 | filwit | dunno how to use that really |
00:48:26 | filwit | but it's fine, i added the comment to the gist, and will just ping him next i'm on |
00:49:55 | filwit | the github Nimrod color highlighter really needs to be updated a bit. It's odd how proc names are highlighted red, but not method/template/iterator/converter names
00:50:29 | filwit | plus i think some keywords, like 'using', are missing (which makes sense since it's new)
01:02:05 | * | brson quit (Ping timeout: 264 seconds) |
01:10:28 | * | caioariede quit (Ping timeout: 265 seconds) |
01:17:02 | flaviu1 | filwit: Actually, it's github's fault things aren't updated
01:17:22 | filwit | flaviu1: ? |
01:17:32 | filwit | oh, nevermind |
01:17:37 | filwit | forgot what i posted earlier |
01:17:54 | flaviu1 | Wait, NM |
01:18:13 | flaviu1 | I thought that they used pygments.rb, which hasn't been updated in a while |
01:18:24 | filwit | i imagine we could put in a request to update the style stuff |
01:18:36 | filwit | idk how these things work with github though |
01:19:10 | flaviu1 | filwit: Well, you'd send a PR to pygments, and github might eventually update their pygments version number |
01:19:35 | flaviu1 | Might get it done faster if you send an email to support after your PR gets accepted |
01:19:47 | filwit | ah, okay. makes sense |
01:20:47 | flaviu1 | https://bitbucket.org/birkenfeld/pygments-main seems like the project is active and accepting PRs |
01:22:35 | filwit | recent activity on that: Swift support... people work fast, lol |
01:22:46 | flaviu1 | Just a bug report, not a PR there |
01:22:56 | filwit | ah, okay |
01:23:04 | filwit | does bitbucket style Nim code correctly?
01:23:35 | flaviu1 | Yes, it also uses pygments |
01:23:50 | filwit | also, i noticed you changed away from Gitbook on your NimByExample |
01:23:52 | filwit | :( |
01:24:02 | filwit | but what color coding are you using for Nimrod now? |
01:24:09 | flaviu1 | pygments |
01:24:17 | filwit | k |
01:24:23 | * | nande joined #nimrod |
01:24:35 | flaviu1 | If there's anything you miss from gitbook, tell me and I'll see if I can fix it |
01:25:07 | filwit | i just thought it in general looked very clean, and i liked the little check-marks next to chapters i've read |
01:25:26 | filwit | but those aren't really worth implementing on your own i'm sure.
01:25:52 | filwit | your new one doesn't look bad. I just prefer the old one more |
01:26:23 | flaviu1 | It might be, I haven't done much javascript and probably should do more |
01:26:38 | filwit | but i'm sure you have your reasons for switching, and ultimately having the site up and complete is worth a lot more than fancy CSS |
01:27:46 | flaviu1 | My reasons for switching were to avoid a node.js dependency and because gitbook almost made me lose my git repository
01:28:07 | filwit | how did it manage that? |
01:28:22 | filwit | i've never used it before |
01:28:37 | flaviu1 | You specify an output directory, and when it wants to output the files, it just rm's that directory |
01:28:50 | filwit | nice |
01:28:56 | filwit | lol |
01:29:05 | flaviu1 | lol, yep |
01:30:03 | * | caioariede joined #nimrod |
01:30:44 | Varriount | Gah. How do I have a closure pass itself to another procedure without passing a pointer to itself as an argument? |
01:31:06 | Varriount | It's like trying to open a crate with the crowbar inside the crate |
01:33:07 | filwit | use the gravity gun, better than crowbar |
01:33:27 | filwit | (ie, don't know the correct answer to your question, sorry) |
01:37:28 | flaviu1 | Varriount: Best I can do is evil pointer magic |
01:37:53 | flaviu1 | But wait, that might not work |
01:38:14 | Varriount | Ah, got it. Create an empty reference to the closure before the closure definition, and fill in the reference afterwards.
01:40:44 | Varriount | Nimrod has made me greatly appreciate closures. |
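Varriount's workaround can be sketched like this (an editorial sketch in modern Nim syntax, not code from the channel; `fact` is a made-up example name):

```nim
# Declare the closure variable first so the closure's body can refer to it...
var fact: proc (n: int): int

# ...then fill it in afterwards; the body captures `fact` and can call itself.
fact = proc (n: int): int =
  if n <= 1: result = 1
  else: result = n * fact(n - 1)

assert fact(5) == 120
```

This sidesteps passing the closure a pointer to itself as an argument: the self-reference goes through the pre-declared variable instead.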
01:46:03 | OrionPK | fowl I added that support for namespaces that I discussed with you previously |
01:46:40 | OrionPK | just added in support for "ns" in your opts argument to class |
01:47:03 | flaviu1 | filwit: That should be possible without any javascript! The little check is actually a green checkmark
01:48:11 | filwit | flaviu1: i was actually thinking it was a server-side thing, but i suppose it would be easy enough with JS as well. |
01:48:18 | fowl | OrionPK, how does it look? ns: sf ? |
01:48:31 | flaviu1 | Well, I'm going to try to do it with just html and css |
01:49:25 | filwit | err.. yeah.. what am i thinking. Just use a :visited CSS selector on the link.. dur
01:52:39 | * | nande quit (Remote host closed the connection) |
02:15:39 | flaviu1 | It turns out you can't be too clever with :visited links for privacy reasons or something like that, so you have to use cookies
02:18:02 | flaviu1 | I wonder how gitbook does it, no cookies |
02:20:26 | filwit | i'll inspect rustbyexample and see if they're using :visited |
02:20:38 | filwit | not sure what you're saying about :visited not working.. that should work fine |
02:22:25 | filwit | doesn't look like it at first glance |
02:22:44 | flaviu1 | https://hacks.mozilla.org/2010/03/privacy-related-changes-coming-to-css-vistited/ |
02:23:56 | * | Joe_knock quit (Quit: Leaving) |
02:24:34 | filwit | thanks, was not aware of this change |
02:25:40 | flaviu1 | May 2010, pretty old. Chrome is the same. Apparently only Opera allows it |
02:26:20 | filwit | yeah haven't actually used :visited in awhile -__-
02:26:57 | filwit | well Gitbook *could* be just storing sessions stuff per IP i guess |
02:27:04 | filwit | but that doesn't really sound right |
02:27:13 | filwit | you sure it doesn't use cookies? |
02:27:24 | flaviu1 | I checked my cookies, I didn't see anything |
02:27:42 | filwit | there was some new JS store() thing awhile ago, but i lost all major interest in JS a couple of years ago |
02:27:48 | filwit | k |
02:28:25 | flaviu1 | Yeah, it's store
02:41:15 | fowl | lol |
02:41:24 | fowl | \:: can be used as an operator |
02:41:37 | fowl | im going to do it, to anger flaviu1 :> |
02:42:28 | flaviu1 | fowl: Fold right? |
02:43:22 | * | saml_ quit (Quit: Leaving) |
02:44:47 | * | kemet joined #nimrod |
02:46:10 | fowl | how would that look |
02:47:01 | * | kemet quit (Client Quit) |
02:48:21 | flaviu1 | :\ |
02:50:15 | flaviu1 | `:\` is the scala operator for foldRight |
02:51:54 | flaviu1 | Hmph. Firefox's developer console uses the same APIs as javascript, so it's actually wrong about visited link styling
02:57:34 | fowl | flaviu1, scala chooses very weird operators |
02:59:18 | flaviu1 | No, it makes sense. You start with the first element and the accumulator, apply your function. the items you've applied line up like a triangle: 1 = 1 + 2; 2 = 1 + 2 + 3; 9 = 1+2+3+4+5+6+7+8+9; |
02:59:24 | flaviu1 | See the shape? :P |
03:01:15 | fowl | thanks for explaining a left fold to me |
03:01:48 | fowl | i dont see :\ in this article though |
03:02:28 | flaviu1 | I never really figured out the difference between a left and right fold anyway |
03:02:54 | fowl | the diff is ((1*2)*3) vs (1*(2*3)) |
03:04:24 | fowl | nimrod's leftfold is diff: foldl(someseq, a + b): seq
03:04:49 | fowl | well, its a template |
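The left/right distinction fowl describes can be seen with (modern) Nim's sequtils templates, where `a` and `b` are the implicitly injected accumulator/item names (an editorial sketch, not code from the channel):

```nim
import sequtils

let xs = @[1, 2, 3, 4]

# foldl associates to the left: (((1 - 2) - 3) - 4)
assert foldl(xs, a - b) == -8

# foldr associates to the right: (1 - (2 - (3 - 4)))
assert foldr(xs, a - b) == -2
```

With an associative operation like `+` the two give the same answer, which is why the difference only matters for operations such as `-` or list construction.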
03:05:45 | * | caioariede quit (Ping timeout: 252 seconds) |
03:09:09 | filwit | it's really annoying for OOP stuff to not be able to forward-declare types :\ |
03:12:07 | filwit | class Key: proc new(...) = allKeys.add(this) # `allKeys` is a `seq[Key]` which needs to know what a `Key` is.. |
03:12:20 | * | caioariede joined #nimrod |
03:12:53 | * | def-_ joined #nimrod |
03:12:55 | filwit | so i have to make a special {.asis.} pragma for vars/procs in a class body just so i can define this global after the type is defined |
03:14:13 | filwit | would be nice if i could just put `type Key = ref object {.undefined.}` or something above, and alleviate the need for the special prag condition inside the macro |
03:14:51 | filwit | there isn't such a thing in Nimrod that already exist is there? Now that I mention it, I think I saw something like this before... |
03:15:50 | flaviu1 | filwit: Sorry, I can't find a red color for already visited pages that doesn't look like vomit |
03:16:17 | * | def- quit (Ping timeout: 252 seconds) |
03:16:18 | filwit | flaviu1: just use a checkmark icon like gitbook does |
03:21:23 | * | hoverbear joined #nimrod |
03:26:14 | OrionPK | fowl you can look at it in the tests |
03:26:25 | OrionPK | fowl class(test, ns: pp, header: "../test.hpp") |
03:26:51 | filwit | ah ha! found it: http://build.nimrod-lang.org/docs/nimrodc.html#incompletestruct-pragma |
03:26:55 | OrionPK | class(Window, ns: sf, header: sfml_h): |
03:26:59 | filwit | not sure that will work tho |
03:27:44 | fowl | OrionPK, i saw |
03:30:18 | fowl | OrionPK, i wrote a function so you can use pp.Test |
03:31:58 | * | caioariede quit (Ping timeout: 245 seconds) |
03:32:08 | OrionPK | fowl pp.Test? |
03:32:16 | fowl | https://gist.github.com/fowlmouth/4ef59af751acf825ec01 |
03:32:38 | OrionPK | I just aliased "ns" and "namespace" so you can use either one now |
03:33:19 | fowl | OrionPK, as in class(pp.test) for test as "pp::test" |
03:34:07 | OrionPK | ah cool |
03:34:24 | fowl | OrionPK, can i merge in some other options like inheritable, parent, to make it a general-case oop macro |
03:34:29 | OrionPK | sure |
03:34:53 | OrionPK | maybe we should start a branch with that stuff even |
03:35:43 | OrionPK | we should clean up and split out the option handler as well |
03:35:56 | OrionPK | so it isn't doing all that case logic in the class macro |
03:36:06 | fowl | ok |
03:50:23 | filwit | flaviu1: you sounded like you knew a bit about the GC, and since Araq isn't awake, do you know if the GC ref-counts each ref assignment? Or does the 'deferred' part mean the ref-counting happens only during a scan? |
03:52:00 | flaviu1 | filwit: I don't know much about the GC in nimrod, just general ideas. I don't know how nimrod does it |
03:52:21 | filwit | flaviu1: ah, okay. I'll ask when he gets up then. |
04:05:32 | * | xenagi quit (Quit: Leaving) |
04:15:58 | * | filwit quit (Quit: Leaving) |
04:56:39 | * | kemet joined #nimrod |
04:59:16 | * | kemet quit (Client Quit) |
05:21:11 | * | nanda` joined #nimrod |
05:39:51 | * | xtagon quit (Quit: Leaving) |
05:46:37 | * | kemet joined #nimrod |
05:47:06 | * | kemet quit (Client Quit) |
05:59:11 | * | bjz joined #nimrod |
05:59:16 | * | Demos quit (Read error: Connection reset by peer) |
06:22:40 | * | bjz quit (Ping timeout: 276 seconds) |
06:23:03 | * | bjz joined #nimrod |
06:26:13 | * | brson joined #nimrod |
06:27:17 | * | brson quit (Client Quit) |
06:27:36 | * | brson joined #nimrod |
06:32:14 | Klaufir | how can I print the type of an expression? |
06:33:49 | * | hoverbear quit () |
06:38:27 | * | brson quit (Ping timeout: 260 seconds) |
06:38:58 | * | brson joined #nimrod |
06:47:13 | fowl | Klaufir, with typetraits you can do name(type(expression))
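For example, the `name(type(...))` combination fowl mentions (a small editorial sketch):

```nim
import typetraits

# `type(expr)` yields the static type, `name` turns it into a string
echo name(type(42))          # int
echo name(type(3.14))        # float
echo name(type(@[1, 2, 3]))  # seq[int]
```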
06:49:29 | * | bjz quit (Ping timeout: 264 seconds) |
07:00:34 | Klaufir | fowl: thanks |
07:05:15 | fowl | np |
07:17:56 | * | darkf quit (Read error: Connection reset by peer) |
07:18:46 | * | darkf joined #nimrod |
07:39:53 | * | kemet joined #nimrod |
08:07:54 | * | brson quit (Quit: leaving) |
08:11:06 | * | bjz joined #nimrod |
08:11:35 | * | kemet quit (Quit: Instantbird 1.5 -- http://www.instantbird.com) |
08:15:35 | * | nanda` quit (Ping timeout: 252 seconds) |
08:18:53 | * | reactormonk quit (Ping timeout: 252 seconds) |
08:21:35 | * | reactormonk joined #nimrod |
08:28:38 | * | BitPuffin quit (Ping timeout: 240 seconds) |
08:45:17 | * | io2 joined #nimrod |
08:51:28 | * | XAMPP-8 joined #nimrod |
09:16:04 | * | kemet joined #nimrod |
09:18:04 | * | kemet quit (Client Quit) |
09:26:25 | * | XAMPP-8 quit (Quit: Leaving) |
10:00:06 | * | kunev joined #nimrod |
10:16:39 | * | kunev_ joined #nimrod |
10:19:11 | * | BitPuffin joined #nimrod |
10:20:41 | * | kunev quit (Ping timeout: 264 seconds) |
10:21:51 | * | kunev_ quit (Quit: Reconnecting) |
10:21:57 | * | kunev joined #nimrod |
11:31:22 | * | Jehan_ joined #nimrod |
11:41:11 | NimBot | nimrod-code/packages master 1c1adea Grzegorz Adam Hankiewicz [+0 ±1 -0]: Adds gh_nimrod_doc_pages to list. |
11:41:11 | NimBot | nimrod-code/packages master 022ed59 Billingsly Wetherfordshire [+0 ±1 -0]: Merge pull request #62 from gradha/pr_gh_nimrod_doc_pages... 2 more lines |
11:50:47 | * | vendethiel quit (Read error: Connection reset by peer) |
11:54:10 | * | ehaliewicz quit (Ping timeout: 276 seconds) |
12:00:18 | * | vendethiel joined #nimrod |
12:05:23 | * | kemet joined #nimrod |
12:06:49 | Klaufir | http://nimrod-lang.org/manual.html#modules : here in the example, after 'import A' it uses A.T1, but after 'import B' it doesn't use B.p(), just p() |
12:07:16 | Klaufir | so, does import pollute the namespace of the module that issues the import statement?
12:09:00 | EXetoC | yes |
12:09:02 | EXetoC | http://build.nimrod-lang.org/docs/manual.html#from-import-statement |
12:10:00 | * | kemet quit (Client Quit) |
12:10:11 | Klaufir | maybe something is wrong with me, but I can't find the answer in the section you linked |
12:10:46 | Klaufir | or you just show it as best practice? |
12:11:01 | * | vegai left #nimrod (#nimrod) |
12:11:03 | * | saml_ joined #nimrod |
12:11:04 | EXetoC | "It's also possible to use from module import nil if one wants to import the module but wants to enforce fully qualified access to every symbol in module."
12:11:07 | EXetoC | that seems relevant |
12:11:38 | Klaufir | yes :) |
12:11:48 | Klaufir | thank you |
12:16:59 | * | untitaker quit (Ping timeout: 265 seconds) |
12:22:09 | * | untitaker joined #nimrod |
12:46:17 | * | saml_ quit (Ping timeout: 252 seconds) |
12:48:50 | * | darkf quit (Read error: Connection reset by peer) |
12:58:47 | * | caioariede joined #nimrod |
13:00:09 | Jehan_ | Klaufir: Yes, Nimrod's "import m" is equivalent to "from m import *" in Python. |
13:00:48 | Jehan_ | However, if you have the same procedure defined in both and attempt to use it unqualified, Nimrod will raise an error when trying to compile that. |
13:01:49 | Jehan_ | Same procedure meaning: same name and same type signature (otherwise, normal overloading resolution applies). |
13:12:19 | * | caioariede quit (Ping timeout: 260 seconds) |
13:22:57 | * | vendethiel quit (Read error: Connection reset by peer) |
13:24:31 | * | vendethiel joined #nimrod |
13:24:52 | NimBot | Araq/Nimrod devel 56a912f klaufir [+0 ±1 -0]: adding header pragma for printf ffi example |
13:24:52 | NimBot | Araq/Nimrod devel 24c0044 klaufir [+0 ±1 -0]: header pragma set to '<stdio.h>' in importc section |
13:24:52 | NimBot | Araq/Nimrod devel ead2d4c Dominik Picheta [+0 ±1 -0]: Merge pull request #1238 from klaufir/devel... 2 more lines |
13:30:11 | * | Jehan_ quit (Quit: Leaving) |
13:38:44 | flaviu1 | the parser is a lot less scary than the VM |
13:42:50 | * | vendethiel quit (Read error: Connection reset by peer) |
13:45:49 | * | vendethiel joined #nimrod |
13:57:33 | * | EXetoC quit (Quit: WeeChat 0.4.3) |
14:15:39 | * | BitPuffin quit (Ping timeout: 252 seconds) |
14:18:54 | * | caioariede joined #nimrod |
14:28:40 | flaviu1 | Is anyone here familiar with the compiler? I made it so that `(` is a valid identifier, but it says that it's redefining '('. Are accent quoted identifiers usually surrounded by accents?
14:32:38 | * | Jehan_ joined #nimrod |
14:56:40 | * | Johz joined #nimrod |
15:00:07 | * | EXetoC joined #nimrod |
15:24:46 | * | caioariede quit (Ping timeout: 276 seconds) |
15:36:24 | * | rixx joined #nimrod |
15:36:29 | * | kunev quit (Quit: leaving) |
15:41:56 | * | bjz quit (Ping timeout: 255 seconds) |
15:49:58 | * | caioariede joined #nimrod |
16:07:01 | * | Johz quit (Quit: Leaving) |
16:15:37 | * | xtagon joined #nimrod |
16:17:00 | * | hoverbear joined #nimrod |
16:21:47 | * | caioariede quit (Ping timeout: 252 seconds) |
16:31:05 | * | BitPuffin joined #nimrod |
16:41:46 | * | Matthias247 joined #nimrod |
16:48:57 | * | caioariede joined #nimrod |
16:58:03 | * | xtagon quit (Quit: Leaving) |
17:00:03 | * | xtagon joined #nimrod |
17:08:42 | * | brson joined #nimrod |
17:09:33 | * | q66 joined #nimrod |
17:09:33 | * | q66 quit (Changing host) |
17:09:33 | * | q66 joined #nimrod |
17:28:47 | * | q66 quit (Ping timeout: 252 seconds) |
17:28:57 | Klaufir | Are sequences similar to C++ vector? |
17:30:41 | * | Changaco joined #nimrod |
17:31:10 | dom96 | yep |
17:32:02 | Klaufir | dom96: and the size is always some factor of 2 ? |
17:32:07 | Araq | Klaufir: updated your PR yet? |
17:32:17 | Araq | Klaufir: no. why would it? |
17:32:34 | Klaufir | Araq: PR - give me a second |
17:32:53 | Klaufir | Araq: I mean that in C++, for 9 elements they allocate space for 16
17:32:55 | * | q66 joined #nimrod |
17:32:55 | * | q66 quit (Changing host) |
17:32:55 | * | q66 joined #nimrod |
17:33:28 | Klaufir | The vector works this way: when running out of the allocated space, it allocates 2 times as much as before |
17:33:35 | EXetoC | size and capacity respectively. seems like common terminology |
17:33:36 | Araq | Klaufir: I know what you mean but the factor of 1.5 has been proven to be superior iirc |
17:33:55 | Jehan_ | Araq: Correct. |
17:34:11 | flaviu1 | Araq: Accent quotes don't work in enums, how should I handle that? Apparently, "redefinition of '('" |
17:34:23 | Klaufir | Araq: So, for sequences its 1.5? |
17:34:31 | EXetoC | s/size/length |
17:34:31 | Araq | Klaufir: yes |
17:35:15 | Araq | flaviu1: not sure what your problem is |
17:35:21 | Araq | gist me your diff please |
17:35:38 | Jehan_ | To be precise, the factor should be the golden ratio or a bit less. |
17:36:02 | Jehan_ | The problem with a factor of two is as follows: |
17:36:14 | flaviu1 | Araq: https://gist.github.com/124c4a4367254288701e |
17:36:35 | Jehan_ | When you grow your memory to size 2^n items, you've so far allocated 1+2+…+2^(n-1) = 2^n-1 items. |
17:36:45 | Jehan_ | Which means that you can't reuse your memory and are wasting half of it. |
17:36:52 | Klaufir | Araq: I am very new to github, should I open a new PR or somehow update the last one ? |
17:37:27 | flaviu1 | What I've done is collapse all the special bracket cases in the tokenizer, which seems to work since `{` is a valid proc name |
17:38:26 | Jehan_ | 1.5 is frequently used because it can be calculated fast (*3/2). |
17:38:55 | Klaufir | Jehan_: You know that modern operating systems don't actually allocate the memory unless you write to it, so even when having 2 GB malloced and 1.1 GB used, only 1.1 GB physical mem will be used, but correct me if I am wrong here. |
17:39:05 | flaviu1 | Araq: Sorry, that gist was missing my uncommited changes: https://gist.github.com/a5f3bbb520c4f0dda829 |
17:39:13 | Jehan_ | Klaufir: After the memory has been used, it will be mapped. |
17:40:06 | Jehan_ | And yes, I know that. |
17:40:46 | Araq | flaviu1: you need to replace add(result, newIdentNodeP(getIdent(tokToStr(p.tok)), p)) with some accumulator string and then do: add(result, newIdentNodeP(getIdent(acc), p))
17:41:26 | Araq | Klaufir: you can update the last one but don't ask me how |
17:41:57 | Araq | you can ask Changaco though, he built github ... right? |
17:42:26 | flaviu1 | Klaufir: Just push to your repo, as before, and the PR will be updated |
17:42:32 | Klaufir | Jehan_: given that unwritten memory will not be mapped, why should people worry about the factor of 2 vs 1.5? What am I missing here?
17:43:01 | Klaufir | Araq: dom96 seems to have fixed it in the meantime: https://github.com/Araq/Nimrod/pull/1238 |
17:43:43 | Jehan_ | Klaufir: Because you resize a memory block once there's no space left in it, meaning that all locations have been written to and it has been mapped. |
17:44:05 | dom96 | Yeah, I merged the PR. |
17:44:12 | Jehan_ | That's assuming that memory isn't already mapped upon allocation, e.g. because the language demands initialization with zeros. |
17:44:22 | Araq | dom96: nice :P |
17:44:52 | flaviu1 | Jehan_: How about calloc? |
17:44:57 | Klaufir | Jehan_: I see, thanks |
17:45:06 | flaviu1 | Does that do anything to avoid initialization? |
17:45:43 | Jehan_ | flaviu1: On the contrary, calloc() requires that the memory is initialized with zeros. |
17:46:05 | flaviu1 | I know, but does it typically hook into the OS to avoid mapping the memory?
17:47:09 | Jehan_ | flaviu1: on POSIX, you can rely on mmap() doing that for you when memory is being paged in and when it's fresh memory. |
17:47:39 | Jehan_ | But in general, memory will be reused a lot, and previous code may have written arbitrary values. |
17:48:05 | Jehan_ | Of course, some allocators also unmap blocks upon deallocation. |
17:48:24 | Jehan_ | And OpenBSD somewhat radically uses mmap()/munmap() for ALL big pieces of memory. |
17:49:00 | Araq | nimrod's memory manager does that too, if the OS supports it |
17:49:06 | Araq | most don't |
17:49:30 | Araq | munmap() really has lots of strange behaviour |
17:49:40 | Jehan_ | malloc() generally makes it easier, because there's no requirement that the memory contains any specific values. |
17:50:17 | * | rixx left #nimrod ("Foyfoy!") |
17:50:35 | Jehan_ | flaviu1: By the way, I saw what filwit said in the logs. |
17:51:06 | Jehan_ | flaviu1: You can point him at https://gist.github.com/rbehrends/9bbeef9ee43260b263ee if you see him. |
17:51:17 | flaviu1 | Ok, I'll do that |
17:51:52 | Jehan_ | In general, you need subtype polymorphism only if a memory location (variable, hash table entry, whatever) can actually contain values of more than one type. |
17:52:09 | * | hoverbear quit (Ping timeout: 240 seconds) |
17:52:11 | Jehan_ | That happening AND requiring multiple inheritance/interfaces is pretty rare. |
17:53:38 | Jehan_ | Most of the time you can use parametric polymorphism. For most of the cases where parametric polymorphism isn't good enough (e.g., abstract syntax trees), variant types or single inheritance are sufficient. |
17:54:09 | Jehan_ | That's why the absence of MI in Nimrod doesn't bother me a whole lot. |
17:54:14 | Araq | plu you can always implement MI yourself with enough casts |
17:54:23 | Araq | *plus |
17:54:30 | Jehan_ | Of course, if you don't have parametric polymorphism AND no MI, then you may be screwed. |
17:54:55 | Araq | you can even hide the 'cast' in a 'converter' |
17:55:24 | Jehan_ | Parametric polymorphism also handles some cases that subtype polymorphism doesn't (such as binary operators). |
17:56:05 | Jehan_ | Araq: Or type adapters/views. |
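Jehan_'s binary-operator point can be illustrated with a tiny editorial sketch (not from the discussion): a generic proc works for every type that provides `+`, something a subtype-based interface cannot easily express:

```nim
# Resolved at compile time for each T that defines `+`:
# no dynamic dispatch and no common base type required.
proc double[T](x: T): T = x + x

assert double(3) == 6
assert double(2.5) == 5.0
```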
17:57:01 | Araq | Jehan_: I found a way to make Promise a ref without additional overhead ... or at least no obvious overheads |
17:57:15 | Jehan_ | Araq: Nice! |
17:57:34 | Jehan_ | I still would campaign for renaming the type. :) |
17:58:27 | Araq | the implementation kind of sucks though .... I need a static array per thread, and if it's full the thread that 'adds' to it needs to wait until the owning thread has cleaned it up
17:59:00 | Araq | this also means that I need a condition var to signal "empty again" |
17:59:18 | Araq | and for this condvar I need a broadcast op |
17:59:42 | Jehan_ | But? |
17:59:44 | * | hoverbear joined #nimrod |
18:00:02 | Araq | well I need to lookup how the WinAPI does a broadcast |
18:00:12 | Jehan_ | You can always chain normal signals. |
18:00:20 | Jehan_ | That avoids the "thundering herd" problem, too. |
18:00:41 | Jehan_ | http://en.wikipedia.org/wiki/Thundering_herd_problem |
18:02:02 | Araq | hmm |
18:02:42 | Matthias247 | Araq: I don't know what exactly you are working on but what speaks against implementing futures and promises like C++(17?) does |
18:03:13 | flaviu1 | Araq: That fixes one bug, but "redefinition of '('" is unaffected. Interesting thing is that `(` works fine as a method name, but not as an enum value. |
18:03:24 | Matthias247 | with the new proposals and the ability to do future.then(...) it's quite nice |
18:03:57 | Araq | Matthias247: C++ uses a shared heap, we don't |
18:04:59 | * | brson quit (Ping timeout: 265 seconds) |
18:05:06 | Matthias247 | so the transmission of the value from one thread to another is the problem? |
18:05:17 | Araq | yes |
18:05:41 | Matthias247 | ok, and you have to store a reference to the future in the promise (which can also be in another thread) |
18:05:42 | Jehan_ | Matthias247: Also, for the historical record, it's not really a C++ invention. |
18:05:50 | Matthias247 | Jehan_: I know |
18:05:54 | Jehan_ | I think C# popularized most of the ideas. |
18:06:36 | Araq | Matthias247: the actual problem is that the very same thread that allocated needs to perform the dealloc
18:06:41 | Matthias247 | yes, the Task<T> things. And Dart even calls them Future<T> with the same semantics |
18:08:12 | Matthias247 | hmm, you could restrict them to only work within a single thread (like Dart, JS, ...) and only allow other mechanisms for inter-thread-communication |
18:08:24 | Jehan_ | Matthias247: The concept is popular for a reason, namely that it works. There are still plenty of devils to be found in the details, of course. And Araq wants more than just implement such a model, if I understand him correctly. |
18:08:53 | Jehan_ | Matthias247: But that's pretty much the point of them. :) |
18:09:08 | Araq | Matthias247: that's exactly what we are doing. we use Future[T] for the async stuff and Promise[T] for the inter-thread-communication
18:10:32 | Jehan_ | I'll also add that while C++ has the benefit of a shared heap, RAII-based reference counting and concurrency go together like gasoline and fire. :) |
18:10:57 | Matthias247 | oh, ok. Does this then mean that Promise has a different meaning than the c++ promise (which completes a future<T>)? |
18:11:18 | Araq | flaviu1: I don't know about your redef problem. tried a .pure enum? |
18:11:21 | dom96 | I don't think our Future and Promise distinction is similar to C++'s at all. |
18:11:33 | EXetoC | Araq: how much work is it to make dom.nim usable you think? perhaps the first thing should be to allow the vars in the module scope to be referenced: "document* {.importc, nodecl.}: ref TDocument ..." |
18:11:57 | Jehan_ | dom96: I haven't looked at the async stuff yet at all, but I suspect that's because you use it differently from their established meaning? |
18:11:57 | Araq | Matthias247: yes, it's completely different. Better name for "Promise" is still welcome |
18:12:10 | dom96 | But I should probably learn more about the semantics in C++ |
18:12:16 | Jehan_ | Araq: I suggested Pending[T]. |
18:12:30 | flaviu1 | Araq: Seems to work, but strange that enums with names like that are invalid while procs are not. |
18:12:50 | Araq | flaviu1: it's not invalid, you have redefinition problem |
18:13:00 | Araq | you have some other '(' somewhere |
18:13:16 | dom96 | EXetoC: What's unusable about the dom module? |
18:13:33 | flaviu1 | Araq: You're right, my bad |
18:13:54 | dom96 | C++ also has std::async |
18:14:14 | Matthias247 | dom96: the future is similar. But the c++ version is currently restricted in the domain that nimrod covers but in the end will be a little bit more general
18:14:19 | EXetoC | dom96: did you read the last part? that seems essential |
18:14:20 | * | kunev joined #nimrod |
18:14:54 | Matthias247 | dom96: do you know this? If not - it's a good read ;) http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3784.pdf
18:15:16 | EXetoC | those vars can't be referenced because they aren't defined in the generated js source |
18:15:20 | Matthias247 | and it goes together with this, what is similar to your Dispatcher: http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3785.pdf |
18:15:29 | Araq | Jehan_: meh fine. I'll rename it to Pending |
18:15:58 | dom96 | Araq: I prefer Promise |
18:16:30 | dom96 | Matthias247: interesting thanks |
18:16:31 | Jehan_ | Araq: Not trying to push you. I don't think Pending is a great name, but Promise is (1) CS slang and (2) the established meaning has slightly different connotations. |
18:16:59 | Jehan_ | Which means that it's likely to confuse both Joe Average Programmer types and experts. |
18:17:56 | Matthias247 | dom96: the continuation functionality with future.then, the unwrap, and the ability to combine multiple futures into one (future.when_any) would for sure also be helpful for nimrod
18:18:57 | Jehan_ | So, the async futures are strictly for a single-threaded case? |
18:19:44 | dom96 | nope |
18:20:02 | Jehan_ | Hmm, then I misunderstood what Araq said earlier. |
18:20:30 | dom96 | In the future the dispatcher will hopefully try to use all your CPU cores which means multiple threads. |
18:21:10 | Jehan_ | In that case, it sounds like the concepts should be unified. |
18:21:24 | * | askatasuna joined #nimrod |
18:21:33 | Jehan_ | And not have one task/future/promise in one place and a somewhat different, but incompatible one, in another. |
18:21:39 | dom96 | Indeed. That is what I think too. |
18:23:04 | Matthias247 | using multiple threads in a dispatcher is not easy. If you then manipulate things from the "dispatcher callbacks" you have to care about synchronization. |
18:23:05 | dom96 | Matthias247: Doesn't 'await' mostly remove the need for 'then', 'when_any' etc? |
18:23:14 | Araq | async futures ARE strictly for the single-threaded case |
18:23:40 | Matthias247 | dom96: no. For example you may want to start an async operation and a timeout in parallel. And wait for which completes first |
18:23:45 | Araq | dom96 simply is overly optimistic we'll get a shared memory GC |
18:24:16 | Matthias247 | dom96: or you want to send data to 100 clients and wait for all to complete with a single operation |
18:24:29 | Matthias247 | you can then use await to wait on the combined future |
18:24:46 | Matthias247 | that's the reason why C# provides both possibilities |
18:25:05 | Jehan_ | dom96: The classical use case is as follows: You have two algorithms to solve a problem, but cannot tell which one is better. So you run both in parallel, use the result of the one that finishes first and stop the other. |
18:25:12 | dom96 | Matthias247: indeed that's a good use case. |
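The combinators discussed here (race an operation against a timeout, wait for 100 sends with one combined future) map directly onto, e.g., Python's `concurrent.futures.wait`. A minimal sketch under that assumption — the task bodies and delays are illustrative, not from any Nimrod API:

```python
import concurrent.futures as cf
import time

def slow(value, delay):
    time.sleep(delay)
    return value

with cf.ThreadPoolExecutor() as pool:
    # when_any: race a real operation against a timeout task,
    # take whichever completes first
    op = pool.submit(slow, "result", 0.01)
    timeout = pool.submit(slow, "timeout", 1.0)
    done, pending = cf.wait([op, timeout], return_when=cf.FIRST_COMPLETED)
    winner = next(iter(done)).result()

    # when_all: "send" to several clients, then wait on the combined future
    sends = [pool.submit(slow, i, 0.01) for i in range(5)]
    done, _ = cf.wait(sends, return_when=cf.ALL_COMPLETED)
    results = sorted(f.result() for f in done)
```

An `await` on such a combined future is then just a single suspension point, which is the design C# exposes with `Task.WhenAny`/`Task.WhenAll`.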
18:26:24 | dom96 | Araq: Alright. How do I spread my async code across all cores then? |
18:26:28 | Jehan_ | I've also implemented a concept of milestones, which is sort of like a barrier, except all the ways in which barriers are broken. |
18:27:32 | Jehan_ | Milestones is basically an abstract value that tasks can contribute to on completion; once a threshold is reached (not necessarily numerical, milestones can be sets, for example), the execution of another task is triggered. |
18:27:52 | Araq | dom96: I don't know. |
18:28:03 | Matthias247 | dom96: that's hard and works only for some use-cases. I think you would need different kinds of dispatchers
18:28:31 | Matthias247 | like the C++ paper describes. One that uses only a single thread and others that can use multiple threads (encapsulating a thread pool)
18:28:46 | Araq | dom96: every async 'task' can use 'parallel' statements for instance |
18:30:00 | Jehan_ | dom96: The only tricky part is really sending data between threads. And that's not really all that tricky (the hard part is efficiently determining the lifetime of objects that multiple threads may be accessing). |
18:30:23 | dom96 | Araq: My plan was to spawn x number of threads (where x is the number of cores) and have each thread poll the dispatcher. |
18:30:48 | Matthias247 | offloading some calculations into other threads works quite well. But doing the whole logic in a bunch of threads often ends in a mess. That was my experience
18:31:34 | dom96 | That would likely lead to race conditions. I haven't really given much thought to the issues that it creates. I simply wanted to try it but that odd TLS emulation bug stopped me. |
18:31:46 | Jehan_ | dom96: The first problem you have with that is where to store the dispatcher. |
18:32:10 | Matthias247 | dom96: That was the strategy that I first used with boost asio. Have one io_service and poll it from multiple threads |
18:32:13 | dom96 | Jehan_: In a shared global variable. |
18:32:21 | Matthias247 | I really would recommend no one do that :)
18:32:24 | dom96 | A bigger problem is the closures. |
18:32:29 | Jehan_ | dom96: And how do you allocate memory? |
18:32:41 | Jehan_ | shared heap, or thread-local heaps? |
18:32:51 | dom96 | shared |
18:33:16 | Jehan_ | In which case the lock around the shared heap's allocation function can become a bottleneck. |
18:34:04 | dom96 | It may in fact be simply too complex. |
18:34:15 | dom96 | brb |
18:34:46 | Jehan_ | Can be circumvented by allocating equal-sized memory chunks in batches and having threads allocate from a free list. |
18:35:01 | Jehan_ | There are other solutions, too, but you have to think about how you want to do that. |
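The batching idea — take the shared lock once per batch, then satisfy individual allocations from a thread-local free list — can be sketched as a toy model (the chunk and batch sizes are made up; this models where the lock would sit, not Nimrod's actual allocator):

```python
import threading

class BatchAllocator:
    """Hands out fixed-size chunks; the shared lock is taken once per batch,
    not once per allocation, so contention drops by a factor of `batch`."""
    def __init__(self, chunk_size=64, batch=32):
        self.lock = threading.Lock()      # guards the (notional) shared heap
        self.chunk_size = chunk_size
        self.batch = batch
        self.local = threading.local()    # per-thread free list

    def alloc(self):
        free = getattr(self.local, "free", None)
        if not free:
            with self.lock:               # one acquisition per `batch` allocs
                free = [bytearray(self.chunk_size) for _ in range(self.batch)]
            self.local.free = free
        return free.pop()

    def dealloc(self, chunk):
        self.local.free.append(chunk)     # returns to the thread-local list

allocator = BatchAllocator()
a = allocator.alloc()
b = allocator.alloc()
allocator.dealloc(a)
```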
18:36:33 | Matthias247 | hmm, unfortunately Swift didn't show what they do regarding concurrency. At least not in the short overviews
18:37:45 | Araq | Jehan_: please review https://gist.github.com/Araq/a37c1b27e1900bd2ca2a |
18:38:24 | Matthias247 | but I guess it will be similar to objective-c. Each thread gets a mainloop with GCD and you can push closures to be executed on any of them
18:38:32 | Araq | and I think I should start to use atomicLoad/Store ... |
18:41:13 | Jehan_ | Araq: Still looking at it, but the problem with any atomic operations is portability. |
18:41:31 | flaviu1 | Jehan_: No portability issues with LLVM IR :P |
18:42:06 | Araq | why? it's 2014. The C compiler either supports atomic ops or is irrelevant |
18:42:17 | Jehan_ | flaviu1: If you want to/can generate LLVM code. I hadn't realized they had added that. |
18:42:39 | flaviu1 | Jehan_: Yes, and it's very fine-grained
18:42:40 | Jehan_ | Araq: Is it part of the C standard yet? C++ yes, but C? |
18:42:50 | Jehan_ | flaviu1: Define fine-grained? |
18:43:23 | Araq | C11 has it too, but it's irrelevant, every C compiler we officially support has them |
18:44:31 | Jehan_ | Araq: I don't think your code is safe with respect to memory reordering. |
18:44:39 | flaviu1 | Jehan_: "NotAtomic, Unordered, Monotonic, Acquire, Release, AcquireRelease, SequentiallyConsistent". I can't remember what they mean, but I remember being impressed. http://llvm.org/docs/Atomics.html
18:45:29 | Araq | # XXX we really need to ensure no re-orderings are done |
18:45:31 | Araq | # by the C compiler here |
18:45:35 | Araq | :P |
18:45:48 | Jehan_ | Not the C compiler. |
18:45:59 | Jehan_ | The processor. |
18:46:25 | Araq | well I will use atomicLoad/Store eventually which imply a fence |
18:47:06 | Araq | but only for the b.interest location, the rest should be fine |
18:47:13 | Jehan_ | It's not just b.interest, also the other memory accesses. |
18:47:49 | Araq | like? |
18:48:15 | Jehan_ | Like "inc b.entered" |
18:48:34 | Jehan_ | That may not become visible to other threads for a while. |
18:48:59 | Jehan_ | You're trying hard to avoid mutexes, but I'd first try to make the code correct with mutexes. |
18:49:06 | Jehan_ | Then you can try to optimize them away later on. |
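The "correct first, fast later" version of the barrier — plain monitor semantics, every counter update under the lock so visibility is guaranteed — might look like this (entered/left mirror the counters in Araq's gist; this is a hedged sketch, not the actual stdlib code):

```python
import threading

class Barrier:
    """All counter updates happen under the lock: mutual exclusion plus
    the guarantee that changes are visible to the other threads."""
    def __init__(self):
        self.cond = threading.Condition()   # one mutex + condition variable
        self.entered = 0
        self.left = 0

    def enter(self):                        # called by the spawner per task
        with self.cond:
            self.entered += 1

    def leave(self):                        # called by each worker when done
        with self.cond:
            self.left += 1
            if self.left == self.entered:
                self.cond.notify_all()      # last one out wakes the closer

    def close(self):                        # spawner blocks until all have left
        with self.cond:
            while self.left != self.entered:
                self.cond.wait()

b = Barrier()
for _ in range(4):
    b.enter()                               # register tasks before they start

workers = [threading.Thread(target=b.leave) for _ in range(4)]
for w in workers:
    w.start()
b.close()                                   # returns once b.left == b.entered
for w in workers:
    w.join()
```

Only once this version proves to be a bottleneck would the lock be selectively replaced with atomics and fences.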
18:49:42 | Araq | well ok, actually I assume stores to words are atomic |
18:50:06 | Araq | which is true for any sane processor in existence
18:50:16 | Jehan_ | They are atomic, but processors may reorder them with respect to other stores or loads. |
18:51:20 | Jehan_ | A modern x86 processor can have a few dozen loads/stores in flight at any given time, and they're one of the better behaved types (ensuring TSO). |
18:51:32 | Jehan_ | Unlike, say, ARM processors. |
18:51:48 | Araq | also I don't avoid mutexes, I also avoid creating a condition variable + signal |
18:52:39 | Jehan_ | For example, barrierLeave may decide to signal because it hasn't seen an increment of b.entered yet. |
18:53:08 | Jehan_ | You're still signaling when everything's done, right? |
18:53:20 | Araq | not necessarily |
18:53:51 | Araq | I still don't see the problem with memory reorderings |
18:53:54 | Jehan_ | The code should be exactly the same otherwise even if you put lock/unlock around all the barrier procedures. |
18:55:17 | Jehan_ | Araq: For example, barrierLeave signaling because it falsely thinks that b.left == b.entered because it hasn't seen the effect of 'inc b.left' yet. |
18:57:43 | Araq | shouldn't atomicInc prevent that? |
18:58:14 | Jehan_ | Araq: That may affect how b.entered is perceived (depending on what memory barrier it actually uses and when). |
18:58:50 | Jehan_ | Eh, I've switched entered and left around there. |
18:59:12 | Araq | öhm |
18:59:22 | Jehan_ | But there are zero guarantees about when other cores will see the effect of "inc b.entered". |
19:00:06 | Jehan_ | inc b.entered will write the value to the L1 cache. When and how it will be propagated to L3 or main memory is something that you can't tell. |
19:01:02 | Jehan_ | The point of surrounding critical regions with lock/unlock is twofold: (1) mutual exclusion and (2) making sure that any changes you made within the critical region will be seen by other cores. |
19:02:10 | Araq | meh I'd rather insert fence instructions |
19:02:44 | Jehan_ | Araq: That has premature optimization written all over it. |
19:03:41 | Jehan_ | There are thousands of ways you can shoot yourself in the foot working without locks. |
19:04:00 | Jehan_ | Unless there's an established need, I'd recommend avoiding it. |
19:05:00 | Jehan_ | Monitors (i.e., mutexes + condition variables) work. They have strong guarantees, and they've been tested to a fare-thee-well. |
19:05:33 | Jehan_ | They may occasionally be inefficient, in which case you carefully and selectively replace them with a lockfree implementation. |
19:07:21 | * | flaviu1 quit (Remote host closed the connection) |
19:07:22 | Jehan_ | But there's nothing cool or inherently superior to working without locks. More often than not, it just leads to buggy code. |
19:08:14 | Araq | I'll consider it |
19:08:28 | Jehan_ | I recommend reading "Double-Checked Locking is Broken" (http://www.cs.umd.edu/~pugh/java/memoryModel/DoubleCheckedLocking.html). |
19:08:45 | Araq | yeah I know about that |
19:08:53 | Jehan_ | Tons of people think they can make it work, but very few understand how. |
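For reference, the pattern under discussion is check, lock, check again. In Python the GIL hides the memory-ordering hazard; in C/C++/Java the second check needs an acquire load paired with a release store on the publishing write, which is exactly what Pugh's page is about:

```python
import threading

_lock = threading.Lock()
_instance = None

def get_instance():
    global _instance
    if _instance is None:             # first check, without the lock (the racy read)
        with _lock:
            if _instance is None:     # second check, under the lock
                # in C++/Java this store must be a release store, otherwise
                # another core can see the pointer before the object's fields
                _instance = object()
    return _instance

a = get_instance()
b = get_instance()
```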
19:09:21 | Araq | and yet who really had a bug with it |
19:20:44 | * | brson joined #nimrod |
19:22:06 | Araq | Jehan_: why would I even use 2 counters when I use a lock for it anyway? |
19:22:55 | Jehan_ | Araq: Depending on how you implement it, you may be able to get away with one counter. |
19:23:14 | Jehan_ | Araq: Two-counter implementations may be useful for reusing the data structure, though. |
19:24:05 | dom96 | back |
19:24:06 | Jehan_ | Because their "done" state is different from the "just started" state. |
19:25:02 | dom96 | In regards to parallelising async, if we want to beat Go on benchmarks we need to do it. |
19:28:03 | Araq | dom96: we can come up with other benchmarks though
19:28:48 | Matthias247 | dom96: go will also only perform good when the application is parallelized (split into many goroutines) |
19:29:08 | Matthias247 | that's probably easy for servers (one routine for each client), but still hard for anything else
19:31:06 | Araq | Jehan_: the check in closeBarrier is often true and then we can avoid quite some overhead. it's "premature" because I didn't measure it but then it's a stdlib implementation and you never know what it is used for |
19:31:19 | Araq | *premature optimization |
19:31:41 | * | flaviu1 joined #nimrod |
19:32:07 | Jehan_ | Araq: A fairly safe way of doing this without locks is to store all the data in a single word that can be updated with a single CAS operation. That does limit you to something like a max value of 2^15 for the counters on 32-bit systems, though. |
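The single-word trick packs both counters into one integer so a single compare-and-swap covers them atomically, which is where the 2^15 limit on 32-bit words comes from. A sketch of the packing arithmetic — the CAS itself is only simulated, since Python has no raw atomics:

```python
BITS = 15
MASK = (1 << BITS) - 1

def pack(entered, left):
    # both counters must stay below 2**15 to fit in one 32-bit word
    return (entered << BITS) | left

def unpack(word):
    return word >> BITS, word & MASK

def cas(cell, expected, new):
    """Stand-in for an atomic compare-and-swap on a machine word."""
    if cell[0] == expected:
        cell[0] = new
        return True
    return False

cell = [pack(0, 0)]
# "inc entered" as a CAS loop: reread and retry if another thread won the race
while True:
    old = cell[0]
    e, l = unpack(old)
    if cas(cell, old, pack(e + 1, l)):
        break
```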
19:32:30 | dom96 | Matthias247: It seems that is good enough to impress a lot of people :P |
19:33:36 | Matthias247 | dom96: node also impresses people - and I have no clue why ;) |
19:33:44 | Jehan_ | I'd still go with a simple and robust implementation first. "Make it work, make it right, make it fast." |
19:35:21 | Araq | the problem is: It is robust in practice no matter what I do :P |
19:35:57 | Araq | it already works, so I'm after "make it fast" |
19:36:17 | Jehan_ | In practice meaning "on x86 processors"? |
19:36:54 | Araq | in practice means on x86 for toy programs
19:37:52 | Jehan_ | The thing is that something working on x86 or SPARC processors may give you an impression of reliability (because TSO can screw you over in only very limited ways) that does not apply to other hardware. |
19:38:37 | Jehan_ | In any event, how big is the maximum number of jobs that you can have participating in your barrier? |
19:39:17 | Jehan_ | I.e. how high can the counters go at most? |
19:39:57 | Araq | 2 billion on a 32bit arch or something |
19:40:11 | Araq | it's hard to tell |
19:40:18 | Jehan_ | On 64-bit? |
19:41:53 | Araq | well usually you use it with an array, so you index that, so its upper bound is high(int) |
19:42:22 | Jehan_ | Gotcha. |
19:42:37 | Jehan_ | I was wondering if there was a practical limit imposed by other constraints. |
19:44:36 | Jehan_ | The check in closeBarrier that you say is often true is "if b.left != b.entered"? |
19:45:06 | Araq | well it's false, b.left == b.entered, aka "all threads have finished" |
19:45:21 | Jehan_ | Yeah, that's what I meant. |
19:45:44 | * | io2 quit (Quit: ...take irc away, what are you? genius, billionaire, playboy, philanthropist) |
19:46:05 | Jehan_ | But the overhead that you can save here is really minuscule compared to everything else that went into the rest. |
19:46:38 | Araq | usually you can sink another statement into the 'parallel' section and then 'b.left == b.entered' becomes even more likely |
19:46:41 | Jehan_ | You may save a hundred clock cycles, but there are probably thousands involved in just waking up worker threads. |
19:47:31 | Araq | well yes. but this "wakeup" thing can be optimized too |
19:48:31 | Jehan_ | In what way? |
19:50:03 | Araq | by some medium amount of busy waiting |
19:50:36 | Jehan_ | That'll still be more expensive. :) |
19:51:20 | Jehan_ | The problem with the efficiency of the parallel section is that cores will be sitting idle (or doing busy waiting) at the beginning when everything is being spun up and at the end when everything is being shut down again.
19:52:15 | Jehan_ | For a parfor statement to be reasonably efficient, each job needs to do a non-negligible amount of work so that the wasted time at the beginning and end don't count for much. |
19:52:43 | Jehan_ | At that point, a bit of synchronization isn't going to hurt you, either. |
19:55:24 | Jehan_ | This is also why everybody is suddenly so keen on implementing tasks/futures/whatevertheycallit + workstealing (plus whatever language features are needed to make it usable). Because it beats the hell out of any other known approach to maximum CPU utilization. |
20:04:11 | Araq | *shrug* parfor is still the basis for GPU programming which beats the hell out of CPU programming; IF it can be used, of course |
20:17:03 | Araq | seriously ... inserting memory fences is seductive. what can possibly go wrong then?
20:17:52 | * | brson quit (Remote host closed the connection) |
20:19:17 | * | kunev quit (Quit: leaving) |
20:19:19 | * | brson joined #nimrod |
20:22:38 | fowl | i couldnt find a timer lib |
20:23:09 | fowl | i had to write my own, like an animal :( |
20:23:33 | Araq | there is stuff for it in system/, fowl |
20:25:07 | fowl | Araq, my hands are bleeding from writing 20 lines tho D: |
20:30:48 | * | Matthias247 quit (Read error: Connection reset by peer) |
20:42:03 | fowl | can you check my prs |
20:42:18 | fowl | 1243 and 1174 |
20:42:37 | Araq | sorry I'm busy |
20:42:53 | fowl | what am i paying you for? |
20:43:10 | fowl | dance! |
20:43:23 | fowl | brb |
20:49:18 | Jehan_ | <grumpyoldman>I need a lawn so that I can properly tell kids to get off it.</grumpyoldman> |
20:50:00 | flaviu1 | Jehan_: Why would you use XML when you could use `:`? |
20:50:08 | Araq | Jehan_: gah, your suggestion of keeping the cond vars in a list is complex |
20:50:43 | Jehan_ | I made a suggestion to keep cond vars in a list? |
20:50:54 | * | Jehan_ must really be getting old. Can't remember that. |
20:51:02 | Araq | in fact it's brain damaging |
20:51:03 | Jehan_ | flaviu1: You lost me? |
20:51:22 | flaviu1 | grumpyoldman: I need a lawn so that I can properly tell kids to get off it. |
20:51:29 | Araq | Jehan_: yes, we talked about broadcasts |
20:51:42 | flaviu1 | You save ~18 characters |
20:52:00 | Jehan_ | Araq: Hmm, I may have been less than clear. |
20:52:29 | Araq | in fact ... with that solution I need to ensure the node is never ever added twice to the list |
20:52:30 | Jehan_ | Araq: The point was not to keep cond vars in a list. Just have one. When a thread receives a signal, it processes it and signals again. |
20:52:42 | Araq | ouch |
20:52:49 | Araq | that's way easier ... |
20:53:01 | Jehan_ | Sorry. :( |
20:53:26 | Jehan_ | flaviu1: You win. Have a cookie. :) |
20:54:31 | Jehan_ | flaviu1: http://www.wunderkessel.de/galerie/showphoto.php?photo=13445 |
20:54:48 | flaviu1 | Haha, looks delicious |
20:55:22 | flaviu1 | Araq: Is there a reason that accent quoted things aren't just one token? |
20:55:41 | flaviu1 | If there isn't, I'd like to make them one token |
20:55:58 | Araq | yeah. let that be. |
20:56:28 | Araq | `foo templParam` can be used for identifier construction |
20:56:50 | Araq | like C's ## preprocessor operator |
20:57:03 | flaviu1 | ok, thanks, I forgot about that |
20:57:05 | Jehan_ | Yes, I saw that a while ago and thought that it was a very nifty way to go about it. |
20:58:03 | * | ehaliewicz joined #nimrod |
21:00:13 | Araq | Jehan_: well it's not easier either way |
21:00:35 | Jehan_ | Araq: Difficult to tell without the context. |
21:00:56 | Araq | the context is: I have a fixed size array, threads append to it |
21:01:12 | Araq | when it's full threads have to wait until it is empty again |
21:02:21 | Araq | now the thread that gets waked up can wake up another thread IF there is still space left in the array |
21:03:15 | Araq | but then there is no way to tell if another thread is even waiting for that event |
21:03:51 | Araq | doesn't matter, I guess |
21:03:59 | Jehan_ | Have a counter for the number of waiting threads? But in the worst case, the signal just gets discarded. |
21:04:27 | Jehan_ | Can this deadlock, by the way, if the array gets full and no thread is running to empty it? |
21:05:14 | Araq | yup, but only if the owning thread crashed and then you have other problems |
21:06:16 | Jehan_ | Yeah. :) |
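The scheme just described — a fixed-size array, one condition variable, and a woken thread re-signalling if there is still space, so no broadcast is needed — might be sketched like this (the class and method names are assumptions, not the actual Nimrod code):

```python
import threading

class BoundedArray:
    """Producers append; when full they wait until the owner drains the
    array. One condition variable; each wakeup passes the signal on."""
    def __init__(self, capacity):
        self.cond = threading.Condition()
        self.items = []
        self.capacity = capacity

    def append(self, item):
        with self.cond:
            while len(self.items) >= self.capacity:
                self.cond.wait()             # full: wait until drained
            self.items.append(item)
            if len(self.items) < self.capacity:
                self.cond.notify()           # space left: wake one more waiter

    def drain(self):                         # the owning thread empties it
        with self.cond:
            out, self.items = self.items, []
            self.cond.notify()               # wake one waiter; it passes it on
            return out

buf = BoundedArray(3)
for i in range(3):
    buf.append(i)                            # a 4th append would block on drain()
drained = buf.drain()
```

If no thread happens to be waiting, the `notify` is simply discarded, which matches the "worst case, the signal just gets discarded" observation above.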
21:09:40 | Araq | hmm in general polling is still much easier than condition variables |
21:11:34 | * | Raynes quit (Ping timeout: 240 seconds) |
21:12:02 | Jehan_ | Yes, but they're generally for different use cases. |
21:12:12 | Jehan_ | Polling requires that you have a free processor. |
21:12:34 | Jehan_ | And that no other threads may push the polling thread off it. |
21:12:34 | * | eximiusw1staken quit (Ping timeout: 240 seconds) |
21:13:32 | * | eximiuswastaken joined #nimrod |
21:14:12 | * | Raynes joined #nimrod |
21:14:26 | * | Raynes quit (Changing host) |
21:14:26 | * | Raynes joined #nimrod |
21:14:27 | Araq | well i can easily sleep(10) and then sleep more -> CPU is free to run another thread -> thread pool notices not all cores are busy and creates another thread |
21:15:33 | Araq | but I have no idea how that compares wrt efficiency to the "proper way" of using condition variables |
21:16:32 | Jehan_ | Condition variables have their own issues, by the way. Such as that the POSIX standard technically does not guarantee any kind of fairness. |
21:17:27 | Jehan_ | I don't know of any OS stupid enough to actually exploit that, but you never know. |
21:18:36 | Araq | I never think about fairness, it's already hard enough to create solutions that work |
21:19:10 | Jehan_ | Well … fairness is part of any solution that works. :) |
21:19:35 | Araq | I knew you would say that :P |
21:20:00 | Jehan_ | :) |
21:20:32 | Araq | but again, where are all the stories about "oh, our Java program crashed and it was due to incorrect double-checked locking"?
21:20:38 | flaviu1 | Araq: Write a macro for Java syntax, then copy-paste code. The java guys have really nice concurrency. |
21:20:52 | flaviu1 | Libraries |
21:21:52 | Araq | it's like the very common overflow bug in binary search algorithms |
21:24:13 | Araq | flaviu1: where is the fun in that? |
21:24:28 | Jehan_ | Araq: Most Java programs don't particularly fine-tune their concurrency. |
21:24:42 | Jehan_ | They just use basic monitor semantics for the most part. |
21:24:57 | Jehan_ | The JVM implementors do pull a few tricks, on the other hand. |
21:25:16 | Jehan_ | Google "biased locking", for example. |
21:25:16 | Araq | I've seen lots of double-checked locking in C#
21:25:55 | Jehan_ | Araq: It doesn't blow up in C# because C# pretty much runs on x86 only and double-checked locking is perfectly safe on a TSO architecture. |
21:26:19 | * | Mat3 joined #nimrod |
21:26:22 | Mat3 | hi all |
21:26:34 | Jehan_ | If you're only interested in x86, then you can almost make as many assumptions as though you were using Pth. :) |
21:26:42 | Araq | sure but my point is: before it fails because of that, it fails earlier because of other threading bugs |
21:26:55 | Araq | or because of OOM |
21:27:15 | Araq | very common in heavily GC'ed languages in fact |
21:27:17 | Jehan_ | Possibly. Someone who can't get double-checked locking right will probably screw up a lot more than that. |
21:27:43 | Jehan_ | And you can do double-checked locking correctly if you know how to use memory barriers. |
21:28:17 | Jehan_ | I've written a hell of a lot of code that avoids locks and condition variables, and I still fall back to them whenever I can. |
21:28:31 | Jehan_ | Simply because the headaches of doing it correctly without are rarely worth it. |
21:28:44 | Jehan_ | And most of the time, you don't gain any speed, anyway. |
21:28:47 | Mat3 | hmm, interesting |
21:28:57 | Jehan_ | Double-checked locking, ironically, is one of the few examples where it's actually worth it. |
21:29:25 | Jehan_ | But in the end, the overhead of synchronization is primarily because any synchronization primitive has to bypass caches and write to main memory. |
21:29:48 | Jehan_ | But that's also true for anything where one thread needs to communicate with a thread on another core. |
21:30:02 | Jehan_ | I.e. you need to push the data to main memory, anyway, and pay the price. |
21:30:34 | Jehan_ | Double-checked locking is the odd exception because once you're past initialization, it's essentially constant on all cores and doesn't get modified any further. |
21:31:10 | Araq | most recent intel CPUs support direct inter CPU communication bypassing the main memory, i think |
21:31:12 | Jehan_ | Memory barriers can also be pretty expensive. |
21:31:50 | Jehan_ | Memory barriers are basically used to exploit the fact that most are no-ops on x86 and still guarantee correctness on other processors. |
21:32:23 | Jehan_ | Araq: In a way, but that's also not cheap. |
21:32:53 | Jehan_ | And where they can, they can also make locks cheap in the same fashion. |
21:34:41 | Jehan_ | The basic operation underlying a lock is basically a read-modify-write that's guaranteed to be atomic and seen by all processors. |
21:35:12 | Varriount | Meep |
21:35:39 | Varriount | dom96: I read the logs at work. Looks like the asyncio api is going to need some changes in order to make it threadable. |
21:39:03 | Jehan_ | Varriount: The one thing I'd like to have for this and similar features are multiple GCed shared heaps. |
21:39:10 | * | askatasuna quit (Ping timeout: 276 seconds) |
21:39:20 | Jehan_ | Basically, thread-local heaps without actual threads controlling them. |
21:39:51 | dom96 | Varriount: Perhaps. But I have no idea what those changes are. |
21:40:02 | Araq | that's quite easy to expose, Jehan_ but also incredibly dangerous |
21:40:16 | Mat3 | Araq: CoMA systems should do that I think |
21:40:16 | Jehan_ | Araq: Yeah, I know, I did look at it. |
21:40:22 | Varriount | Jehan_: Do you think simply emulating the windows io notification queue could work for multithreaded IO? |
21:40:30 | Jehan_ | And I agree with the dangerous part. |
21:40:44 | Jehan_ | Varriount: I don't know much about Windows. |
21:40:59 | Varriount | Jehan_: Do you know what a proactor is? |
21:41:18 | Jehan_ | Varriount: Yup. |
21:41:33 | Varriount | Jehan_: It's that. |
21:42:00 | Jehan_ | Varriount: Still doesn't tell me enough. |
21:42:31 | Jehan_ | It does depend on the OS kernel what's efficient and what isn't. |
21:42:38 | Varriount | Jehan_: On windows, you can make a call which returns immediately, then poll a notification queue for a completion event. |
21:42:38 | * | hoverbea_ joined #nimrod |
21:42:48 | Varriount | *an IO call |
21:43:00 | Jehan_ | Varriount: Yeah. |
21:43:18 | Jehan_ | The question is, is it efficient? |
21:43:30 | Varriount | Jehan_: In this case, yes. |
21:43:51 | Jehan_ | Oh, wait, you want to emulate it for asyncio? |
21:43:56 | Jehan_ | Rather than using it? |
21:44:04 | Jehan_ | I think I misread your original question. |
21:44:09 | Varriount | Jehan_: If multiple threads wait on the same queue, the OS actively selects a handful of threads to wake up and send notifications to. |
21:44:58 | Varriount | Jehan_: I'm wondering if that sort of model could be used with Nimrod's threading model to implement efficient asyncio |
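The model Varriount describes — several worker threads blocked on one completion queue, with one waiter woken per event — can be mimicked with a single shared queue (a toy emulation only; real IOCP uses `GetQueuedCompletionStatus` and kernel-managed wakeups):

```python
import queue
import threading

completions = queue.Queue()           # stands in for the IO completion port
results = []
results_lock = threading.Lock()

def worker():
    while True:
        event = completions.get()     # exactly one waiter wakes per event
        if event is None:             # sentinel: shut this worker down
            return
        with results_lock:
            results.append(("handled", event))

workers = [threading.Thread(target=worker) for _ in range(4)]
for w in workers:
    w.start()

for i in range(10):                   # the "kernel" posts completion events
    completions.put(i)
for _ in workers:                     # one sentinel per worker
    completions.put(None)
for w in workers:
    w.join()
```

Because the queue hands each event to exactly one waiter, there is no thundering herd: the other workers stay blocked instead of racing for the same socket.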
21:45:58 | * | hoverbear quit (Ping timeout: 240 seconds) |
21:46:07 | Araq | Varriount: IMO that's the wrong question to ask. If you're IO bound, you're not CPU bound by definition |
21:46:46 | Jehan_ | Varriount: I don't think there's an easy answer. As far as I know, libevent is a non-trivial piece of code. |
21:46:54 | Varriount | Araq: Then what's the right question? |
21:47:53 | Varriount | It's also a matter of avoiding the 'thundering herd problem' with threading and IO |
21:47:57 | Jehan_ | I'd hook into libevent rather than reinventing it if I wanted to do fast multi-threaded I/O. |
21:48:19 | Araq | Varriount: why is Varriount still not happy with the existing async IO? |
21:48:56 | Varriount | Because Varriount wants scalable IO, like it says on the tin. *looks at dom96* |
21:49:20 | Araq | Jehan_: I've heard that a lot. we created our own for lots of reasons |
21:49:38 | Varriount | If you have multiple threads that all use a posix-like poll() on a single socket, you get a bunch of thread contention as they fight for resources. |
21:50:11 | Varriount | If you use only a single thread, you are limited in how well you can scale. |
21:50:11 | Jehan_ | Araq: There are plenty good reasons to create your own (such as not having a gazillion external dependencies). |
21:51:03 | Varriount | Araq: See http://msdn.microsoft.com/en-us/library/windows/desktop/aa365198(v=vs.85).aspx |
21:51:14 | Araq | Varriount: only MongoDB and node.js are webscale anyway |
21:51:25 | Araq | and /dev/null of course
21:51:40 | Varriount | -_- |
21:51:43 | Jehan_ | Varriount: If you want scalable rather than "maximum speed attainable", then the question becomes easier. |
21:51:43 | flaviu1 | Araq: Only if its over the internet, as a service |
21:52:02 | Varriount | Araq: And neither of those are written in Nimrod, I notice. |
21:52:04 | EXetoC | that never gets old |
21:52:45 | flaviu1 | EXetoC: http://devnull-as-a-service.com/ |
21:52:50 | Mat3 | does Nimrod support actors ? |
21:53:03 | flaviu1 | Yes, apparently they're slow though |
21:53:05 | * | Changaco quit (Ping timeout: 264 seconds) |
21:55:18 | Mat3 | good to know, thanks |
21:55:42 | * | askatasuna joined #nimrod |
21:56:57 | dom96 | Araq: Async gets rid of the IO bottleneck so CPU only remains. |
21:57:42 | dom96 | Most benchmarks test how fast you can accept connections and serve their request, and this becomes CPU bound quickly. |
21:58:04 | dom96 | Because you always have a client connecting to your server socket. |
22:03:07 | Araq | how the times change. I thought web apps always wait for a database query or a connection. Nowadays they are all CPU bound... |
22:03:43 | Jehan_ | Well, we're talking about benchmarks. :) |
22:03:54 | Jehan_ | Which may or may not have a connection to actual practice. :) |
22:04:13 | Jehan_ | According to most benchmarks, Ruby on Rails is completely unusable. |
22:05:34 | Araq | dom96: can you please tell me why using 4-8 processes on a 4 core CPU is not as efficient for these benchmarks? |
22:05:35 | dom96 | Yeah, you're right. It's silly to optimise for these sorts of benchmarks. But then what do we optimise for? |
22:05:57 | dom96 | Being faster than Go differentiates us in some way at least. |
22:06:09 | dom96 | But I guess I can forget about achieving that. |
22:06:48 | dom96 | Araq: Like I said multiple times already, I cannot spawn the processes dynamically for the framework benchmarks. |
22:07:02 | dom96 | I have to hard code a number in. |
22:07:16 | * | filwit joined #nimrod |
22:07:18 | Araq | eh ... what? |
22:07:27 | filwit | hey Jehan_, you around? |
22:07:50 | filwit | i read logs, and saw your response to my earlier gist |
22:07:59 | Varriount | dom96: Could you explain, if not for Araq, then for my sake? |
22:08:17 | dom96 | Varriount: Araq: Say I have 8 processes, each needs to listen on a different scgi port. Nginx needs these ports to be hard coded into its config file. |
22:08:40 | Jehan_ | dom96: I wasn't criticizing you for optimizing for benchmarks. It's silly that the game is played this way, but there's little you can do about it. |
22:08:45 | dom96 | Now if the CPU has only 4 cores and I spawn 4 processes then the config file needs to be edited. |
22:09:23 | Jehan_ | dom96: I am still amused by the "benchmarks über alles" attitude that ignores a lot of other important concerns. |
22:09:44 | Jehan_ | filwit: Yeah, intermittently afk, but I'm here. |
22:09:55 | Araq | dom96: we can hack around that limitation easily |
22:09:57 | dom96 | Right now because I have rewritten the http server we can stop using nginx. |
22:10:03 | dom96 | And we should do that |
22:10:17 | dom96 | But that prevents your approach too |
22:10:38 | dom96 | Unless I write a load balancer |
22:11:00 | Araq | in any case, sending an incoming socket to a thread is hardly a problem, even with nimrod's thread local GCs |
22:11:09 | dom96 | which will do what nginx does with scgi... |
22:11:28 | dom96 | This approach is simply not elegant. |
22:11:48 | filwit | Jehan_: oh good, you're here. The problem I see with your gist, is that it's (as i was expecting) fundamentally different than mine in that I use a reference of an interface type to point to "higher order" behavior dynamically. I guess "design differently" does apply to most cases, true. |
22:11:50 | dom96 | If you want to sell "Yeah, we support scaling. Just spawn multiple processes" then by all means go ahead. |
22:12:00 | dom96 | But I am not convinced that people will buy that. |
22:12:43 | Jehan_ | filwit: Yes. The point I was trying to get at is that dynamic dispatch is unneeded most of the time. |
22:13:10 | Varriount | It all boils down to a problem of architecture: How do we present a usable, scalable, and asynchronous approach to IO? |
22:13:25 | filwit | Jehan_: I don't think i illustrated my point as well as I should have by using single pointers (a & v in my gist) to illustrate instead of `seq[IAction]`, etc |
22:13:36 | flaviu1 | Varriount: Pick 2 of 3? |
22:13:56 | filwit | Jehan_: i understand that often dynamic dispatch can be avoided, and should be when it's possible |
22:14:00 | Jehan_ | filwit: And that it's extremely rare to need to dispatch based on two different hierarchies. |
22:14:24 | * | ehaliewicz quit (Read error: Connection reset by peer) |
22:14:35 | Jehan_ | filwit: My claim was never that MI is useless, but that it's needed very rarely and obviated mostly by other features that Nimrod has. |
22:15:12 | Mat3 | Varriount: Actors ? |
22:16:14 | filwit | Jehan_: well, that's kinda why i was asking you for an example of a design pattern that would match. For instance, imagine you have a GUI system. Button and Slider are both `GUI` objects, and can be added to the same seq[GUI] list; however, they might also need to be derived from different things (Visual for example, where Visual adds functionality, not just enforces protocol).
22:16:24 | * | ehaliewicz joined #nimrod |
22:16:34 | filwit | Jehan_: in these situation, is there a better approach than MI? |
22:17:35 | Araq | ah here we go, UIs need OO |
22:17:35 | Jehan_ | filwit: heterogeneous containers (such as ASTs or seq[AbstractType]) are the classical example of where you need dynamic dispatch, but once you get to looking at examples where you actually need them, they crop up pretty rarely. |
22:18:02 | * | hoverbea_ quit () |
22:18:03 | filwit | Jehan_: I was thinking that even in my `trait` design, I would attempt to lower to this sort of generic (non-dynamic) behavior if possible (through use of 'prov' vs 'method'), but I can't imagine a better (more straightforward) approach to tackle dynamic dispatch than MI (except procvar lists, but those have their own issues)
22:18:56 | fowl | components will save you from your oo nightmares |
22:19:34 | Jehan_ | filwit: The question is whether you even want/need dynamic dispatch most of the time. |
22:19:42 | fowl | in fact, you use modules, your code is component oriented, you just have to apply that to your widgets/objects |
22:19:44 | filwit | fowl, yes, but they also require the programmer to write maintenance code... which is sorta what my Trait thing is, actually |
22:19:59 | fowl | :rolleyes: |
22:20:36 | Varriount | fowl: This isn't github - Most clients don't have that kind of smiley parsing. |
22:20:50 | Jehan_ | filwit: As a workaround, you can always use views + Nimrod methods to put objects in more than one type hierarchy. |
22:21:10 | fowl | :poop in a bucket: |
22:21:17 | filwit | Jehan_: yes, but I gave a specific example of where dynamic dispatch is useful (seq[GUI]), and I was wondering what better design pattern could accomplish it (short of a component, procvar list, manager)
22:21:40 | Jehan_ | As I said, ideally you do want MI. It's just that I don't think it's a killer feature as long as you have parametric polymorphism and some sort of variant types or single inheritance.
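[editor's note: the "parametric polymorphism" alternative Jehan_ keeps invoking is roughly this: when the concrete type is known at the call site, a generic proc duck-types its argument at compile time, so no base class or dispatch table is needed. A hedged sketch with invented names:]

```nim
type
  Button = object
    label: string
  Slider = object
    value: float

proc describe(b: Button): string = "button " & b.label
proc describe(s: Slider): string = "slider " & $s.value

# generic proc: instantiates for any T that has a matching `describe`;
# resolution happens entirely at compile time
proc report[T](x: T) =
  echo describe(x)

report Button(label: "OK")
report Slider(value: 0.5)
```

[the trade-off under discussion: this can't put a Button and a Slider in the same seq, which is exactly the case where filwit says dynamic dispatch earns its keep.]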
22:22:04 | filwit | fowl, i'm not saying it's super hard or something that engine designers shouldn't do... only that it's not a great solution for those looking to quickly hack together functional code.
22:22:16 | * | gsingh93_ joined #nimrod |
22:22:26 | Araq | filwit: not saying that it is better solution, but keeping things separate works too |
22:22:30 | Jehan_ | filwit: You don't need to convince me that it's desirable (I've written too much Eiffel code to think otherwise). Just that it's not all THAT valuable. |
22:22:50 | Araq | so you don't even have the seq[GUI] in the first place |
22:22:58 | filwit | Jehan_: okay. That sounds reasonable. I just wanted to know if you had a better solution I wasn't aware of for this. Thanks for the clarification.
22:23:52 | fowl | if wordcount was nickels you would be BALLIN filwit |
22:24:09 | fowl | i cant argue with paragraphs |
22:24:22 | filwit | Araq: yes, I agree for many things this is a better design pattern (see fowl's comments about component systems), but it requires design on top of the structures you're trying to write (so bad for quickly hacking things together... at least for some).
22:24:36 | Jehan_ | filwit: You can also use views to emulate multiple inheritance. They have the benefit that they work after the fact, too. |
22:25:04 | filwit | fowl: you can quote sections of my statements and respond accordingly |
22:25:13 | Araq | filwit: I'd go for the "immediate mode UI" anyway |
22:25:19 | fowl | gui isnt hard, i write a specialized version for every project i do |
22:25:27 | * | Varriount quit (Quit: Leaving) |
22:25:49 | Jehan_ | I.e. basically the adapter pattern integrated with the type system. |
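[editor's note: the "views" workaround Jehan_ describes might look something like the sketch below: an adapter object puts an existing type into a second hierarchy after the fact, i.e. the adapter pattern expressed with ordinary methods. All names here are hypothetical:]

```nim
type
  Visual = ref object of RootObj     # the second hierarchy
  Widget = ref object                # existing type, NOT derived from Visual
    name: string
  WidgetView = ref object of Visual  # the view/adapter wrapping it
    wrapped: Widget

method render(v: Visual) {.base.} = discard
method render(v: WidgetView) = echo "rendering ", v.wrapped.name

var visuals: seq[Visual] = @[]
visuals.add WidgetView(wrapped: Widget(name: "close button"))
for v in visuals:
  v.render()
```

[as Jehan_ notes, the advantage over real MI is that this works "after the fact": Widget's declaration never had to anticipate the Visual hierarchy.]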
22:25:50 | fowl | display some info, handle clicks, anything else can be added later |
22:27:25 | filwit | Araq, fowl: this also extends beyond GUI (in fact I probably wouldn't use MI for a GUI anyways...). However, it's still a commonly understood design pattern, and pretty much the "cheapest" (in terms of code required) way to construct sane software.
22:27:26 | EXetoC | Araq: you weren't convinced by the counter arguments provided in this channel before? |
22:27:37 | EXetoC | I don't know nothing about GUI approaches btw |
22:28:19 | filwit | Araq, fowl: at the end of the day, i agree with Jehan_ of course, that most of the time (with generic code) MI is not needed really. |
22:28:26 | Jehan_ | filwit: I think what you're seeing here is a difference in programming paradigm. |
22:28:50 | Araq | EXetoC: I've used both. For games I'd always use the immediate mode UI. it's a pleasure to debug. |
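[editor's note: for contrast, the immediate-mode style Araq favors drops the retained widget tree entirely: the UI is re-declared as plain proc calls every frame, so no inheritance question arises in the first place. A toy sketch, not a real renderer:]

```nim
# a widget is just a proc call made each frame; it returns
# whether it was activated on this frame
proc button(label: string, mouseDown: bool): bool =
  # a real UI would draw `label` and hit-test the mouse here
  result = mouseDown

proc frame(mouseDown: bool) =
  if button("Quit", mouseDown):
    echo "quit pressed"

frame(false)
frame(true)   # simulates a click on this frame
```

[debugging is straightforward because each frame's UI is a plain top-to-bottom trace of proc calls, which is presumably what Araq means by "a pleasure to debug".]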
22:29:19 | Jehan_ | An OCaml programmer, for example, would likely be mystified by the desire for multiple inheritance (and even single inheritance for the most part). |
22:29:29 | * | Varriount joined #nimrod |
22:29:38 | * | brson quit (Ping timeout: 255 seconds) |
22:30:31 | Jehan_ | OCaml does have objects and classes, but they see very little use. |
22:30:59 | filwit | Jehan_: my intentions with most of the "points" I raise here are about Nimrod's adoption, and my ability to advertise it. OCaml devs may have fine "work arounds", but that language is nowhere near as popular as Java/C++/C#, and all of those support a common design pattern Nimrod does not.
22:31:15 | Jehan_ | filwit: I understand that. |
22:31:54 | Jehan_ | filwit: But Nimrod is in many respects different from Java/C++/C#, often intentionally so. I'm not sure how much you can hide that. |
22:31:56 | flaviu1 | I like Scala's multiple inheritance |
22:32:56 | Jehan_ | flaviu1: The only real issue with MI is to implement it efficiently. |
22:33:18 | flaviu1 | Yeah, scala's version had an interface call for each trait you stacked on
22:33:19 | Jehan_ | Lots of people got scared of it because C++ managed to screw it up. |
22:33:24 | filwit | Jehan_: actually, that's what makes Nimrod so great: yes, its core solutions are different (often more "ground level") than the big-3, but its meta-programming allows us to support all of their paradigms as well, with very similar syntax.
22:34:10 | * | saml_ joined #nimrod |
22:34:11 | Jehan_ | filwit: I'm not sure that implementing a different language on top of Nimrod is the best idea. It reminds me of many of the problems LISP always had when that happened. :) |
22:34:43 | Araq | hi saml_ welcome |
22:34:51 | filwit | Jehan_: well it's not so much "different language" but "look, you want OOP? Just `import oop`" |
22:35:03 | saml_ | hi Araq . i've been waiting for you
22:35:04 | Jehan_ | Part of the problem with understanding LISP code is that you often have to understand several different sublanguages, few of them documented. |
22:35:19 | Jehan_ | filwit: Yeah, but new syntax, new programming model and conventions ... |
22:35:45 | fowl | its not new syntax |
22:36:31 | fowl | nvm im not paying attention to what yall are saying
22:36:34 | Jehan_ | Having methods with an implicit this parameter qualifies, I think. |
22:36:36 | filwit | Jehan_: yeah, I would agree somewhat (even though I think standards *should* be up to third parties as much as possible), but in this case OOP is such a commonly understood paradigm it makes sense. |
22:36:38 | dom96 | good night |
22:36:58 | Mat3 | Jehan_: Can you please be more precise: which problems do you see with ML on top of Lisp?
22:37:25 | Jehan_ | Mat3: Huh? I wasn't talking about ML on top of LISP. |
22:37:37 | Mat3 | ML = Meta Language |
22:38:16 | Mat3 | or application specific language |
22:38:16 | Jehan_ | Mat3: The problem is that these programs are a pain to understand and maintain. |
22:38:34 | filwit | brb |
22:38:37 | Jehan_ | For relatively little benefit in general. |
22:38:49 | Mat3 | doesn't this depend on the language implemented ? |
22:39:27 | Mat3 | I mean one can implement all kinds of languages in Lisp (even Scheme)
22:39:31 | Jehan_ | Mat3: Yes, but try dealing with half a dozen slightly different LISP dialects in the same program. |
22:39:54 | flaviu1 | Basically why java is so successful |
22:40:16 | flaviu1 | The cost of doing clever stuff like that is so high no one does it |
22:40:19 | Jehan_ | There's a famous bon mot that every language eventually implements a LISP engine.
22:40:20 | Araq | I've heard that a lot. IME the average Java program is so far worse it's not funny |
22:40:24 | Mat3 | just study their declarations, I really see no problem in doing that
22:40:29 | Jehan_ | A little known corollary is that this applies to LISP, too. :) |
22:41:04 | Jehan_ | Mat3: Once you're dealing with programs north of a few hundred KLOC, it's not that easy anymore. |
22:41:33 | Araq | flaviu1: most Java code I've seen is unmaintainable. |
22:41:51 | flaviu1 | Araq: I've had much better experiences |
22:42:22 | flaviu1 | I haven't done anything with heavy reflection though |
22:42:24 | Jehan_ | Araq: The problem with Java is the opposite, i.e. that the language was so feature poor initially that all kinds of workarounds via design patterns became common practice. |
22:43:02 | flaviu1 | Enums as singletons, haha |
22:43:10 | Jehan_ | Overuse of design patterns for trivial stuff was a general fad, too. |
22:43:10 | Araq | if you have more than a few hundred KLOC, you've done it wrong, Jehan_ ;-) |
22:43:31 | Jehan_ | Araq: I wish. Some programs simply are that big. |
22:43:55 | flaviu1 | And then Scala came around, and programs in scala were half the size of Java, but they sometimes overused operators. |
22:44:03 | Jehan_ | One of my least favorite jobs involved a C++ codebase with 1.5 million LOC (in the early/mid-aughts). |
22:44:08 | flaviu1 | Also, allocated like crazy and were sometimes very slow |
22:44:55 | * | hoverbear joined #nimrod |
22:45:02 | Jehan_ | While that codebase had its problems, there really wasn't much you could have done to slim it down. |
22:45:49 | flaviu1 | Araq: It seems that I can't have `#` in my backticks, but I guess that's a small price to pay for the added implementation ease
22:46:59 | Mat3 | Jehan_: That argument applies to every language I think. Anyhow, I see your point (and I think the only good solution for it is simply structured programming in general)
22:48:03 | Jehan_ | Mat3: I wasn't arguing against languages. I was arguing against the temptation to build new and shiny DSLs on top of languages just because they allow it. |
22:48:37 | Jehan_ | With great power comes great responsibility and all that. :) |
22:48:55 | Jehan_ | I *like* metaprogramming capabilities. |
22:49:16 | Jehan_ | But they can also be dangerous for software maintenance if people go overboard with them. |
22:49:26 | OrionPK | hola |
22:49:44 | Araq | good DSL design simply has to be learned like good API design |
22:49:47 | flaviu1 | Jehan_: I agree with you, but how can programming languages allow flexibility while making going overboard on the flexibility difficult? |
22:50:10 | Araq | I can't see yet that it's inherently harder to do than good API design |
22:50:12 | Jehan_ | flaviu1: They can't. I was preaching self-restraint. |
22:50:19 | Varriount | flaviu1: By using social taboos |
22:50:45 | Jehan_ | Araq: You haven't seen what custom defmacro variants can do. :) |
22:50:45 | flaviu1 | That works, but a special few will feel they are too special to abide by the taboos |
22:50:46 | Varriount | Eg, shun those who misuse powerful constructs |
22:50:57 | Araq | and nobody argues we shouldn't use APIs in large software systems |
22:51:03 | * | hoverbea_ joined #nimrod |
22:51:36 | flaviu1 | Varriount: In free time programming, you have that choice, not always |
22:51:43 | Araq | people go overboard with it *because* they couldn't do it all before |
22:52:12 | Jehan_ | Araq: Eh, the seasoned LISP programmers are often the worst. :) |
22:53:10 | Mat3 | Jehan_: That's the philosophy behind Lisp (this and factoring - a strategy for avoiding the problems you mentioned)
22:54:15 | * | hoverbear quit (Ping timeout: 260 seconds) |
22:54:24 | Mat3 | Lisp programs are often applicative languages which implicitly solve a range of related problems |
22:55:14 | Mat3 | (or generators for solving them) |
22:57:42 | Mat3 | like different mathematical notations are useful for abstraction purposes |
22:59:29 | Mat3 | get some sleep, ciao |
22:59:33 | * | Mat3 quit (Quit: Verlassend) |
23:01:04 | Jehan_ | Sleep sounds like a good idea. See you around. :) |
23:01:08 | * | Jehan_ quit (Quit: Leaving) |
23:01:57 | Araq | same here, bye |
23:02:05 | Varriount | Goodnight |
23:05:26 | dom96 | hrm, looks like we got mentioned on r/programmingcirclejerk |
23:05:39 | Varriount | ? |
23:05:59 | Varriount | I'm assuming, due to the subreddit, that it wasn't anything good? |
23:06:04 | dom96 | not really |
23:06:08 | dom96 | but meh |
23:06:18 | dom96 | even bad PR is good PR :P |
23:06:20 | dom96 | good night for reals |
23:06:45 | flaviu1 | But they're making fun of wikipedia more than nimrod |
23:08:29 | flaviu1 | [On swift] "I liked it before all the plebs started talking about it. Now I'm considering switching to nimrod. " |
23:09:41 | Varriount | Ouch. |
23:10:09 | Varriount | flaviu1: I just hide anything that mentions swift in my reddit client (I browse reddit mainly through an android app) |
23:11:47 | flaviu1 | Varriount: Great idea, RES does that too |
23:13:40 | Varriount | dom96: I'm experiencing closure-ception. I have a closure which calls a procedure with itself to schedule itself somewhere else. |
23:19:08 | flaviu1 | dom96: You should see how much they're bashing on Go |
23:20:28 | * | xtagon quit (Excess Flood) |
23:21:41 | * | xtagon joined #nimrod |
23:28:30 | * | hoverbea_ quit () |
23:35:15 | filwit | back, read the reddit thing "You definately should. It has whole program dead code elimination. How anyone could code before this is beyond me." |
23:35:41 | filwit | am i missing something here? |
23:36:01 | filwit | i thought the jokes were supposed to be clever
23:37:10 | filwit | but picking one of the few more 'minor' benefits of Nimrod and acting like that's it isn't clever at all... maybe i'm missing the point of this subreddit tho |
23:38:00 | Varriount | I guess the humor is in pointing out a seemingly minor or worthless feature that nonetheless is touted as a clever feature? |
23:38:03 | filwit | oh damn, Araq left before i could question him about his GC design.. |
23:39:09 | filwit | Varriount: i guess.. though it's not even a hugely brought up thing |
23:39:16 | * | xenagi joined #nimrod |
23:39:41 | filwit | guess it's listed on the front page, so some people (who are looking for things to complain about) would take it wrong? |
23:41:05 | filwit | either way, pretending Nimrod *doesn't* have excellent performance and an excellent GC (and assuming that's not relevant) is just a silly joke. |
23:41:43 | filwit | If he wanted to make a better one, he should have mentioned the need for forward-declaration or something, idk.. |
23:41:56 | Varriount | I mean, if I were to point out nimrod's biggest flaw, it would either be a fact we couldn't directly control (like public mindshare) or some architectural choice (like the unconventional threading model)
23:42:12 | filwit | i would respond but I feel like I would just come off as "that guy" in the wrong subreddit.. |
23:42:23 | Varriount | Even then, it's easier to poke fun at Java |
23:42:29 | filwit | yeah |
23:43:20 | filwit | meh, at least Nimrod is big enough to where 8 people on some random subreddit knew it enough to get the joke. |
23:43:43 | Varriount | Even if it is a rather humorless one. |
23:44:09 | Varriount | (By the way, can humor also be spelled "humour"?) |
23:45:29 | filwit | (dunno, I only speak americanrish) |
23:49:18 | * | Demos joined #nimrod |
23:49:30 | * | darkf joined #nimrod |