00:00:20 | * | darkf_ joined #nimrod |
00:00:59 | * | armin1 joined #nimrod |
00:01:32 | * | reloc0 quit (Disconnected by services) |
00:01:33 | * | clone1018 joined #nimrod |
00:01:35 | * | armin1 is now known as reloc0 |
00:02:11 | * | asterite1 quit (Quit: Leaving.) |
00:02:39 | * | Jessin joined #nimrod |
00:03:24 | * | clone1018_ quit (Read error: Connection reset by peer) |
00:03:24 | * | xenagi quit (Ping timeout: 240 seconds) |
00:04:06 | * | mal`` quit (Ping timeout: 240 seconds) |
00:04:07 | * | Jesin quit (Ping timeout: 240 seconds) |
00:04:18 | * | darkf quit (Ping timeout: 240 seconds) |
00:04:18 | * | Amrykid quit (Ping timeout: 240 seconds) |
00:04:19 | * | Demos quit (Ping timeout: 240 seconds) |
00:04:56 | * | Amrykid joined #nimrod |
00:05:09 | flaviu | Wow, crypto hashes are pretty crap in performance closer to 0 bytes |
00:05:50 | flaviu | At 8 bytes, they require ~100-500 cycles per byte |
00:06:13 | flaviu | Still, I'll benchmark later, of course |
00:06:15 | flaviu | http://bench.cr.yp.to/results-sha3.html |
00:06:28 | * | Demos joined #nimrod |
00:07:49 | Araq | dom96: so is c2nim an "app"? |
00:08:01 | Araq | or a "binary"? or both? |
00:08:04 | * | Demos_ joined #nimrod |
00:09:56 | * | Roin quit (Ping timeout: 240 seconds) |
00:09:56 | * | Demos quit (Ping timeout: 240 seconds) |
00:09:57 | * | mal`` joined #nimrod |
00:13:54 | dom96 | Araq: both |
00:14:37 | Araq | so both tag then? |
00:14:39 | * | XAMPP-8 joined #nimrod |
00:17:27 | * | CARAM_ quit (Ping timeout: 260 seconds) |
00:17:31 | dom96 | Araq: Sure. I don't enforce the tags to be consistent. |
00:18:06 | * | vendethiel- joined #nimrod |
00:18:44 | Araq | is my PR automatically updated? |
00:18:51 | flaviu | Araq: Yes |
00:18:55 | * | vendethiel quit (Ping timeout: 260 seconds) |
00:19:01 | flaviu | Any commits to that branch get added to the PR |
00:19:10 | Araq | nice |
00:20:47 | * | clone1018 quit (Ping timeout: 260 seconds) |
00:20:56 | * | TylerE quit (Ping timeout: 260 seconds) |
00:21:16 | * | clone1018 joined #nimrod |
00:21:43 | * | CARAM_ joined #nimrod |
00:22:42 | * | TylerE joined #nimrod |
00:28:09 | * | darkf_ is now known as darkf |
00:29:27 | * | boydgreenfield joined #nimrod |
00:30:11 | Araq | hmm I just got an idea |
00:30:50 | Araq | the parser should special case @`+` |
00:31:08 | Araq | this is an infix operator @ that takes `+` |
00:31:12 | Araq | so we can write |
00:31:18 | Araq | a @`+` b |
00:31:46 | * | armin1 joined #nimrod |
00:32:18 | Araq | and the @ is a proc that lifts `+` to sequences, like 'map' |
00:32:41 | * | reloc0 quit (Disconnected by services) |
00:32:46 | * | armin1 is now known as reloc0 |
00:32:51 | Araq | so instead of map(`+`, a, b) we get a @`+` b |
00:32:58 | * | krusipo_ joined #nimrod |
00:33:01 | Araq | opinions? |
00:33:18 | * | milosn_ joined #nimrod |
00:34:23 | Demos_ | kinda neat |
00:34:50 | flaviu | Like http://stackoverflow.com/a/8001065/2299084 "Placeholder syntax"? |
00:35:14 | dom96 | I don't get this |
00:35:54 | * | Roin joined #nimrod |
00:36:02 | * | reactormonk_ joined #nimrod |
00:36:17 | dom96 | I don't think 'map(`+`, a, b)' is a correct usage of map |
00:36:41 | * | saml_ joined #nimrod |
00:36:58 | * | Raynes_ joined #nimrod |
00:37:17 | Araq | dom96: it's a binary map then or whatever, I'm sure you get the idea |
00:37:42 | * | Jessin quit (*.net *.split) |
00:37:44 | * | phI||Ip quit (*.net *.split) |
00:37:44 | * | reactormonk quit (*.net *.split) |
00:37:44 | * | Skrylar quit (*.net *.split) |
00:37:44 | * | milosn quit (*.net *.split) |
00:37:45 | * | Raynes quit (*.net *.split) |
00:37:45 | * | comex quit (*.net *.split) |
00:37:45 | * | tumak_ quit (*.net *.split) |
00:37:45 | * | krusipo quit (*.net *.split) |
00:37:52 | * | Raynes_ is now known as Raynes |
00:37:53 | * | Raynes quit (Changing host) |
00:37:53 | * | Raynes joined #nimrod |
00:37:54 | dom96 | Araq: Can you show me an example in terms of sequences? |
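A sketch of the proposal in terms of sequences, as dom96 asks: `@` would be a proc (spelled `lift` here, a hypothetical name) that takes a binary operator and applies it elementwise, like a binary map. The proposed parser special case would then make `a @`+` b` sugar for this call.

```nim
# Hedged sketch of the lifting proc under discussion. `lift` stands in for
# the `@` proc Araq describes; the parser change is what would allow the
# `a @`+` b` spelling.
proc lift[T](op: proc (x, y: T): T, a, b: seq[T]): seq[T] =
  result = newSeq[T](min(a.len, b.len))
  for i in 0 .. result.len - 1:
    result[i] = op(a[i], b[i])     # apply the lifted operator pairwise

echo lift(proc (x, y: int): int = x + y, @[1, 2, 3], @[10, 20, 30])
# @[11, 22, 33]
```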
00:38:16 | flaviu | Araq: Is "Placeholder syntax" at the link somewhat like you're saying? |
00:38:25 | * | Skrylar joined #nimrod |
00:38:36 | boydgreenfield | Any: What version of the compiler should I be using if I want to make use of threadpool and spawn? I’m on v0.9.5 (6/27) on my local machine, and have something working perfectly, but on a remote server v0.9.5 (6.29) I get: lib/pure/concurrency/threadpool.nim(64, 25) Error: undeclared identifier: 'fence' |
00:38:43 | * | phI||Ip joined #nimrod |
00:38:43 | boydgreenfield | (while compiling) |
00:39:26 | * | comex joined #nimrod |
00:40:14 | boydgreenfield | Ok… nevermind. I missed the threads option. My bad. |
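For reference, the setup boydgreenfield arrived at looks roughly like this (a minimal sketch; the key part is the `--threads:on` switch that was missed):

```nim
# Minimal threadpool/spawn sketch; compile with:
#   nimrod c --threads:on example.nim
import threadpool

proc slowDouble(x: int): int =
  x * 2

let fv = spawn slowDouble(21)   # runs on a pool thread, returns a FlowVar
sync()                          # wait for outstanding spawned tasks
echo ^fv                        # ^ blocks until the result is available
```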
00:40:48 | * | Jessin joined #nimrod |
00:40:50 | * | tumak joined #nimrod |
00:41:25 | * | q66 quit (Quit: Leaving) |
00:44:41 | * | Mathnerd626 joined #nimrod |
00:45:03 | * | Mathnerd626 quit (Read error: Connection reset by peer) |
00:51:49 | * | lorxu quit (Ping timeout: 240 seconds) |
00:52:01 | * | lorxu joined #nimrod |
00:53:30 | * | superfunc joined #nimrod |
00:56:04 | * | ARCADIVS joined #nimrod |
00:58:43 | * | lorxu quit (Ping timeout: 240 seconds) |
01:03:15 | Araq | boydgreenfield: ok, so now I know people are already using threadpool |
01:03:23 | Araq | interesting |
01:04:21 | boydgreenfield | Araq: Well, I just did for the first time. Should I be holding off? |
01:04:57 | Araq | the API is stable now afaict |
01:05:11 | Araq | it'll get more features though |
01:05:34 | mmatalka | is it going to be a copy of TPL? |
01:06:53 | Araq | no |
01:22:35 | flaviu | Araq: If you're still here, I'm curious why the compiler needs to go through the hassle of bootstrapping. Isn't it possible to just compile the c sources and avoid bootstrapping? |
01:23:24 | Araq | the c sources are platform specific |
01:23:36 | Araq | but yes, that's what ./build.sh does |
01:23:50 | superfunc | Does anybody know of a clean way to design a general menu system for games? I have some ideas, but none are very clean. |
01:24:17 | flaviu | superfunc: Not suggesting anything, but I think some games use html |
01:24:50 | superfunc | Ah, I mean more for control flow, I already have a pretty slick idea for rendering |
01:25:19 | flaviu | Araq: Although I'm sure you've considered it, couldn't the `when` statements be translated to C for it to deal with? |
01:25:38 | Araq | it's not only the 'when' |
01:25:41 | flaviu | superfunc: Oh, control flow is easy. Just have an array of function pointers and call one each frame |
01:26:14 | Araq | but the 'when' is perhaps the most problematic feature for it |
01:26:30 | Araq | but what's the gain? |
01:26:44 | flaviu | You don't ever have to worry about compiler dependencies |
01:26:59 | Araq | yeah right |
01:27:13 | superfunc | A friend of mine just mentioned a FSM for it, I'll consider the tradeoffs between the two ideas. Thanks flaviu |
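The two ideas on the table, flaviu's per-frame callable and the FSM, can be sketched together, since a state whose handlers are procs called once per frame is both at once (all names hypothetical):

```nim
# Hedged sketch: menu control flow as a tiny FSM. Each state's handler runs
# once per frame and returns the next state.
type
  MenuState = enum msMain, msOptions, msQuit

proc mainMenu(): MenuState =
  echo "main menu"
  msOptions                      # pretend the user picked "options"

proc optionsMenu(): MenuState =
  echo "options"
  msQuit                         # pretend the user picked "quit"

var state = msMain
while state != msQuit:           # the game loop, one handler call per frame
  case state
  of msMain:    state = mainMenu()
  of msOptions: state = optionsMenu()
  of msQuit:    discard
```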
01:27:28 | Araq | because C doesn't have to worry about that ... wait a sec? what do you mean, autoconf?! |
01:28:06 | flaviu | Oh, you're right. I didn't consider that different systems even on the same architecture have different capabilities |
01:28:58 | Araq | btw I don't worry about compiler deps |
01:29:13 | Araq | I can easily make these work too |
01:29:22 | Araq | I prefer not to though |
01:33:46 | flaviu | Also, has nimrod been tested on big endian machines? |
01:34:27 | flaviu | I'm trying to set up one in an emulator, and I don't know if things will work |
01:35:21 | flaviu | QEMU is so slow that even bare-bones linux takes forever to boot |
01:35:36 | Araq | it works on big endian machines |
01:35:58 | Araq | we regularly run it on powerpc which is big endian, I think |
01:36:39 | flaviu | IIRC it's bi-endian |
01:37:23 | superfunc | ^ |
01:38:16 | flaviu | Seems the bigger issue is finding a compiler |
01:38:35 | flaviu | GCC supports mips64, but it seems arch doesn't have a copy for that |
01:50:05 | Araq | good night |
01:50:26 | superfunc | night araq |
01:53:48 | * | boydgreenfield quit (Quit: boydgreenfield) |
01:54:59 | * | superfunc quit (Ping timeout: 264 seconds) |
01:59:18 | * | brson joined #nimrod |
02:17:11 | * | Nimrod quit (Ping timeout: 264 seconds) |
02:18:14 | * | Nimrod joined #nimrod |
02:21:53 | * | Jessin quit (Quit: Leaving) |
02:26:43 | * | brson quit (Ping timeout: 240 seconds) |
02:28:02 | * | brson joined #nimrod |
02:30:27 | * | Jesin joined #nimrod |
02:33:13 | * | xtagon joined #nimrod |
02:53:47 | * | shevy quit (Ping timeout: 264 seconds) |
03:01:19 | flaviu | qemu is slow |
03:01:26 | flaviu | I guess I'll have to leave it overnight |
03:11:17 | * | lorxu joined #nimrod |
03:26:37 | * | boydgreenfield joined #nimrod |
03:27:17 | boydgreenfield | Another babel question, sorry. Any way to force it to take from the head of a git repo? (As opposed to the last tag?) |
03:28:05 | * | brson quit (Quit: leaving) |
03:32:24 | flaviu | boydgreenfield: It's actually a new feature |
03:32:37 | boydgreenfield | flaviu: How so? |
03:32:45 | flaviu | Let me see if I can find the blog post |
03:33:10 | boydgreenfield | flaviu: / I added a tag, and am now getting this error (https://github.com/nimrod-code/babel/blob/509eff97a3f590a8b06af774d772e21d3bc3df06/src/babelpkg/download.nim#L155), despite the version actually being in range |
03:33:38 | flaviu | http://picheta.me/articles/2014/06/babel--nimrods-package-manager.html |
03:33:45 | * | Mathnerd626 joined #nimrod |
03:33:49 | flaviu | `babel install commandeer@#26b6c035b6c` |
03:34:42 | flaviu | I'm not actually very familiar with babel, I just remember dom mentioning that |
03:34:52 | boydgreenfield | flaviu: Can one specify that in a .babel file though? |
03:34:55 | * | Mathnerd626 quit (Read error: Connection reset by peer) |
03:35:11 | boydgreenfield | I’ll open an issue – appears to be a bug in the getPackageInfo() call (may be excluding the last commit) |
03:35:35 | flaviu | I think so. set the version to "#6593562364" or whatever |
03:36:10 | flaviu | The sample on that page says `Requires: "nimrod >= 0.9.4, commandeer > 0.1, https://github.com/runvnc/bcryptnim.git#head"` is valid |
03:37:51 | * | Mathnerd626 joined #nimrod |
03:38:56 | boydgreenfield | Requires “ssh…#head” works fine, but there’s some error with looking for a tag that’s also the head. I’ll try to track down exactly where. |
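Pulling together what's quoted above, a hypothetical .babel file pinning a dependency to a repository head (the Requires line is the one quoted from the blog post; the rest of the fields are illustrative):

```
[Package]
name          = "example"
version       = "0.1.0"
author        = "someone"
description   = "hypothetical package"
license       = "MIT"

[Deps]
Requires: "nimrod >= 0.9.4, https://github.com/runvnc/bcryptnim.git#head"
```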
03:40:16 | * | saml_ quit (Quit: Leaving) |
03:51:03 | * | Mathnerd626 quit (Read error: Connection reset by peer) |
03:53:13 | * | boydgreenfield quit (Quit: boydgreenfield) |
04:00:11 | * | Mathnerd626 joined #nimrod |
04:21:37 | * | Mathnerd626 quit (Remote host closed the connection) |
04:22:49 | * | XAMPP-8 quit (Ping timeout: 240 seconds) |
04:25:47 | * | XAMPP-8 joined #nimrod |
04:45:49 | * | XAMPP-8 quit (Ping timeout: 240 seconds) |
04:48:02 | * | XAMPP-8 joined #nimrod |
05:12:20 | * | Demos_ quit (Read error: Connection reset by peer) |
05:13:12 | * | xenagi|2 quit (Quit: Leaving) |
05:27:07 | * | XAMPP-8 quit (Ping timeout: 240 seconds) |
05:28:56 | * | xtagon quit (Ping timeout: 248 seconds) |
05:44:16 | * | terrydog101 joined #nimrod |
05:52:13 | * | terrydog101 quit (Quit: Bye) |
06:17:47 | * | flaviu quit (Ping timeout: 264 seconds) |
06:42:20 | * | io2 joined #nimrod |
07:39:25 | * | CARAM_ quit (Changing host) |
07:39:25 | * | CARAM_ joined #nimrod |
07:39:28 | * | TylerE quit (Changing host) |
07:39:28 | * | TylerE joined #nimrod |
07:41:13 | * | BitPuffin quit (Ping timeout: 248 seconds) |
08:07:54 | * | BitPuffin joined #nimrod |
08:12:24 | * | BitPuffin quit (Ping timeout: 260 seconds) |
08:41:15 | * | kunev joined #nimrod |
08:59:43 | * | io2 quit () |
09:08:18 | * | BitPuffin joined #nimrod |
09:12:37 | * | johnsoft quit (Read error: Connection reset by peer) |
09:13:12 | * | johnsoft joined #nimrod |
09:13:30 | * | BitPuffin quit (Ping timeout: 255 seconds) |
09:30:30 | * | Matthias247 joined #nimrod |
09:42:23 | * | noam quit (Ping timeout: 264 seconds) |
09:51:15 | * | Fr4n joined #nimrod |
10:09:03 | * | BitPuffin joined #nimrod |
10:09:08 | * | Amrykid quit (*.net *.split) |
10:10:27 | * | Amrykid joined #nimrod |
10:13:50 | * | BitPuffin quit (Ping timeout: 240 seconds) |
10:19:51 | * | io2 joined #nimrod |
10:32:37 | * | q66 joined #nimrod |
10:32:44 | * | ARCADIVS quit (Ping timeout: 240 seconds) |
10:57:11 | * | silven joined #nimrod |
11:00:47 | * | ARCADIVS joined #nimrod |
11:09:50 | * | BitPuffin joined #nimrod |
11:14:20 | * | BitPuffin quit (Ping timeout: 240 seconds) |
11:26:28 | * | BitPuffin joined #nimrod |
12:07:42 | * | BitPuffin quit (Ping timeout: 245 seconds) |
12:09:00 | * | io2 quit (Ping timeout: 260 seconds) |
12:20:22 | * | BitPuffin joined #nimrod |
12:23:32 | * | untitaker quit (Ping timeout: 245 seconds) |
12:29:47 | * | Araq is now known as araq |
12:30:16 | * | untitaker joined #nimrod |
12:31:50 | araq | ping Varriount |
12:32:50 | * | darkf quit (Quit: Leaving) |
12:35:43 | * | BitPuffin quit (Ping timeout: 244 seconds) |
12:36:05 | * | ARCADIVS quit (Quit: WeeChat 0.4.3) |
12:39:18 | dom96 | araq: you're lowercase again |
12:44:07 | * | araq is now known as Araq |
12:46:53 | * | kunev quit (Ping timeout: 252 seconds) |
12:48:19 | * | io2 joined #nimrod |
12:49:40 | def- | const arrays have to start at 0? |
12:54:48 | Araq | const foo: array[3..5, int] = [3,4,5] # should work |
12:55:42 | def- | Araq: does not i think |
12:56:46 | def- | with let it works |
13:05:23 | Araq | well the compiler itself uses it so I'm puzzled |
13:05:29 | Araq | bug report please |
13:05:42 | def- | Araq: exactly, i was also wondering when i saw that in the compiler |
13:05:53 | def- | I'll open an issue |
13:24:30 | * | Mathnerd626 joined #nimrod |
13:43:15 | * | flaviu joined #nimrod |
14:02:04 | * | Jehan_ joined #nimrod |
14:05:56 | * | BitPuffin joined #nimrod |
14:10:18 | * | Mathnerd626 quit (Ping timeout: 240 seconds) |
14:30:44 | * | asterite joined #nimrod |
15:08:43 | Araq | hi asterite how's crystal? |
15:09:33 | Araq | def-: ah that's a known bug with the const eval engine |
15:09:47 | Araq | if carr[4] is not evaluated at compile-time it works ... |
15:09:53 | asterite | Hi Araq. Good, we are playing with macros :) |
15:10:56 | Araq | how about abandoning this "we want a compiled Ruby" bullshit and helping us instead? ;-) |
15:12:42 | asterite | Hehehe :-P |
15:13:26 | asterite | how about abandoning this "i put types everywhere" bullshit and helping us instead? ;) |
15:13:29 | def- | Araq: aah, ok |
15:15:11 | Jehan_ | Heh. :) |
15:15:16 | * | Trustable joined #nimrod |
15:16:07 | * | io2 quit () |
15:17:35 | Araq | asterite: ah yeah ... these crazy types, much better to conflate every feature mankind ever invented and call it a 'class' :P |
15:19:47 | asterite | Someone told me there was nothing else after 'class', and I believed him |
15:20:00 | asterite | But then I found 'struct', 'module' and some other stuff |
15:22:50 | * | kunev joined #nimrod |
15:24:59 | Jehan_ | I'm probably some weird freak in that I like both dynamically (esp. Ruby) and statically typed languages. |
15:25:12 | Araq | yup. indeed. |
15:25:35 | * | Jehan_ sulks in a corner. |
15:25:41 | Araq | but maybe I programmed in the wrong dynamic languages |
15:26:14 | Araq | Lua, Python, Lisp, Smalltalk ... hrmm I don't think so |
15:26:15 | Jehan_ | Well, I did write an Eiffel compiler in Ruby (okay, with a parser/lexer in PCCTS). |
15:26:25 | asterite | I like both, I just don't like the speed of dynamic languages and the lack of feedback when compiling (because… no compiling :-P) |
15:27:24 | Jehan_ | It depends on the application domain and what constraints you're dealing with. |
15:28:08 | Jehan_ | For example, code in interpreted languages is generally easier to deploy. |
15:28:26 | Matthias247 | I'm probably the only one who finds programming in dynamic languages more time-consuming than in static languages ;) |
15:28:55 | Araq | Matthias247: no, it's objectively slower IMHO |
15:29:41 | Araq | Jehan_: I find compiled code easier to deploy, usually |
15:29:45 | Jehan_ | Matthias247: Really depends on what you're doing and what constraints you're dealing with. |
15:30:49 | Jehan_ | Araq: Depends on what you can assume about the target environment. |
15:31:54 | Jehan_ | Bootstrapping the necessary environment for most compiled languages can be complicated, time-consuming, or both. |
15:32:44 | Jehan_ | Nimrod is the big exception here and it's one of Nimrod's major attractions for me. |
15:32:45 | Araq | in what kind of environment can I assume Python 3 is the default but do know nothing about the CPU architecture? |
15:33:04 | Jehan_ | Then don't write Python 3 code. :) |
15:33:56 | Araq | how is Ruby or others better in this respect? |
15:36:19 | Jehan_ | Building Python or Ruby from scratch should take a couple of minutes, tops. |
15:38:01 | * | io2 joined #nimrod |
15:39:04 | Araq | we have different opinions about what it means to "deploy" then ;-) |
15:39:37 | Jehan_ | Araq: I'm mostly concerned with platforms that can in the worst case be pretty barebones. |
15:39:54 | Jehan_ | Or have really, really outdated stuff. |
15:40:08 | Jehan_ | It's sadly not unusual for HPC environments. |
15:43:19 | asterite | In a compiled language deploying can't be just dropping an executable? |
15:43:25 | Jehan_ | I also sometimes need to write code that runs on whatever a particular user has at home. |
15:43:27 | asterite | (in -> with) |
15:44:21 | Jehan_ | asterite: Try cross-compiling on your laptop for a Cray XE? |
15:45:24 | asterite | I don't deploy on that computer that often :) |
15:45:50 | Jehan_ | Yeah. As I said, I'm dealing with various HPC environments on a not so infrequent basis. :) |
15:46:11 | Jehan_ | And several of them can really turn your notion of portability upside down. |
15:46:41 | * | asterite quit (Quit: Leaving.) |
15:48:48 | Araq | hmm wrk includes LuaJIT |
15:49:16 | Araq | fatal error: openssl/ssl.h: No such file or directory |
15:49:17 | Araq | #include <openssl/ssl.h> |
15:50:44 | * | Demos joined #nimrod |
16:01:48 | Jehan_ | LuaJIT can be a bit finicky, especially because it doesn't use something like autoconf. |
16:02:55 | Jehan_ | For what it's worth, I've been using a stripped down Lua in the past to do autoconf-like stuff without having to deal with /bin/sh as the lowest common denominator. |
16:03:37 | Demos | One of the things that attracted me to nimrod was that you could just do autoconf stuff using CTFE and slurp/gorge |
16:04:12 | Jehan_ | Demos: At the cost of driving up compile time, though. |
16:04:23 | Jehan_ | Better to write a configure script in Nimrod. |
16:06:59 | Demos | I suspect that autotools/make/cmake/whatever is already really slow |
16:07:32 | Demos | besides you get a whole lot of the benefit of autotools with just modules and nimrod's runtime shared library loading scheme |
16:08:59 | Jehan_ | Demos: It's slow, but that's primarily because of running the C compiler for every single feature. |
16:09:06 | Jehan_ | And there's no way around that, really. |
16:09:45 | * | joelmo joined #nimrod |
16:10:18 | Demos | right, I also like the fact that building nimrod on windows does not involve black magic and the souls of long dead FSF members |
16:10:59 | Demos | question: how do object varients interact with cyclic data structures? |
16:11:20 | Jehan_ | What do you mean by "interact with"? |
16:11:32 | Jehan_ | And the point about Windows is well-taken. :) |
16:11:36 | Demos | let me post a gist |
16:12:57 | Demos | https://gist.github.com/barcharcraz/8ce838a131b7e3e7c74c |
16:14:40 | Jehan_ | That won't work unless you make it ref object. |
16:14:54 | Demos | the contents of the array I assume |
16:14:59 | Demos | or well it would be the same |
16:15:12 | Jehan_ | It's a data structure that has infinite size. |
16:15:27 | Jehan_ | Or potentially infinite size at least. |
16:15:41 | Demos | yeah, that is what I thought. |
16:15:59 | Jehan_ | You can't really lay that out in memory without pointers, either. |
16:16:09 | Demos | and reflecting on how tagged unions actually work it should have been obvious |
16:16:44 | Jehan_ | Why not make it a ref object? That's what I tend to do by default. |
16:17:27 | Demos | yeah my current code has array[1..3, ref TTest] |
16:17:54 | Demos | I like to allow stack allocation of each node I guess |
16:18:01 | Demos | not that it matters that much |
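A hedged reconstruction of the gist under discussion: a recursive variant object only works when the recursion goes through a ref, since a by-value version would have unbounded size (type and field names hypothetical, modeled on the `array[1..3, ref TTest]` mentioned above):

```nim
type
  TKind = enum nkLeaf, nkBranch
  PTest = ref TTest
  TTest = object
    case kind: TKind
    of nkLeaf:
      value: int
    of nkBranch:
      children: array[1..3, PTest]   # the ref breaks the infinite layout

# A by-value `children: array[1..3, TTest]` cannot be laid out in memory:
# each TTest would have to contain three complete TTests, and so on forever.
```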
16:20:22 | * | asterite joined #nimrod |
16:26:44 | * | asterite quit (Quit: Leaving.) |
16:28:41 | * | asterite joined #nimrod |
16:28:41 | * | asterite quit (Client Quit) |
16:40:48 | * | kunev quit (Ping timeout: 255 seconds) |
16:54:06 | * | blamestross quit (Quit: blamestross) |
17:14:20 | * | Mathnerd626 joined #nimrod |
17:14:20 | * | lorxu quit (Read error: Connection reset by peer) |
17:14:27 | * | lorxu joined #nimrod |
17:28:35 | * | lorxu quit (Ping timeout: 264 seconds) |
17:29:37 | * | Jesin quit (Quit: Leaving) |
17:36:08 | Demos | can you iterate through an enum in nimrod? or do you have to cast? |
17:37:31 | fowl | yes iterate |
17:37:45 | Jehan_ | What fowl said. |
17:42:59 | EXetoC | it currently iterates over holes also |
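Iterating an enum needs no cast; a low..high range works directly. A sketch, with EXetoC's caveat noted:

```nim
type TDirection = enum dNorth, dEast, dSouth, dWest

for d in low(TDirection) .. high(TDirection):
  echo d          # dNorth, dEast, dSouth, dWest; no cast required
# For an enum declared with holes (e.g. dEast = 2), this loop at the time
# also stepped through the unused in-between values, per the remark above.
```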
17:55:23 | Jehan_ | Ouch. This must hurt for Mexico. |
17:57:47 | * | asterite joined #nimrod |
18:06:20 | * | BitPuffin quit (Ping timeout: 240 seconds) |
18:06:46 | * | asterite quit (Quit: Leaving.) |
18:15:13 | Varriount | Araq: Hm? Release what? |
18:15:15 | * | asterite joined #nimrod |
18:37:43 | * | ARCADIVS joined #nimrod |
18:45:11 | * | Mathnerd626 quit (Read error: Connection reset by peer) |
18:48:37 | Varriount | Hello asterite, ARCADIVS |
18:53:12 | asterite | Hi Varriount |
19:05:26 | * | asterite quit (Quit: Leaving.) |
19:06:41 | * | asterite joined #nimrod |
19:08:48 | Araq | Varriount: well I wanted to release today |
19:08:57 | Araq | but that would be extremely rushed |
19:09:23 | Varriount | Release... like, an entirely new release? |
19:09:47 | Araq | yeah |
19:10:32 | Varriount | Hasn't it only been like... 3 months since the last release? |
19:11:04 | Varriount | Or are we now aiming for more frequent releases? |
19:11:20 | * | asterite quit (Ping timeout: 260 seconds) |
19:11:24 | Varriount | Or is there some big new feature that needs to be released? |
19:13:19 | Araq | well I thought the async stuff is now stable |
19:13:33 | Araq | but apparently it is not ... |
19:14:11 | Varriount | It has a long way to go. Personally, I'd like it to be actually able to utilize more than one core for IO first. |
19:14:54 | Araq | what about that corruption? you said you detected some wrong GC_unref call? |
19:15:29 | Varriount | That was related to my integration of file monitoring with asyncio |
19:15:33 | Skrylar | well you see |
19:15:39 | Skrylar | he added literate nimrod support lol |
19:15:50 | Varriount | Skrylar: Huh? |
19:16:29 | Skrylar | Varriount: literate programming is when you hate yourself enough to write your program as though it was a paper document with code segments |
19:17:10 | Jehan_ | Skrylar: Say what you want, but literate programming can be pretty darn nice for maintaining code. |
19:17:22 | Skrylar | Jehan_: i *have* toyed around with noweb before :) |
19:17:33 | Skrylar | And org-mode's version of it |
19:18:45 | Skrylar | i think its probably better when you're documenting an algorithm like "how does deflate work" though... i already have a lot of normal comments in a file, and literate tools tend to botch up your other ones |
19:18:45 | Araq | Varriount: yes, well it sounded like a general bug in asyncio |
19:19:02 | Skrylar | other ones = other tools |
19:22:54 | Araq | Varriount: using multithreading with async IO *on windows* is almost impossible for nimrod |
19:23:48 | Jehan_ | Skrylar: In my experience, where literate programming tends to beat comments is at documenting the "big picture". |
19:24:03 | Araq | you have to use multiprocessing instead. |
19:24:40 | Skrylar | Jehan_: i usually resort to asciidoc at that point |
19:25:19 | Skrylar | there was a tool i wanted for nimrod, but i didn't have a hasher back then; it basically read section flags and gave you hash codes, so it could check if your documentation was outdated on a given topic |
19:25:26 | Varriount | Araq: Not if some flexibility is sacrificed... |
19:25:27 | Jehan_ | Skrylar: Yes, my point is not that it cannot be done, just that it tends to happen more naturally. |
19:26:00 | Jehan_ | If you write your code as part of a textual description, it tends to turn out differently than when you write textual descriptions as part of the code. |
19:26:13 | Varriount | Araq: Why do you think it's nearly impossible? |
19:27:02 | Skrylar | Jehan_: i have a bad problem where the source information tends to get botched :/ |
19:27:04 | Araq | because Windows essentially says "ok, this callback can be run on any thread" |
19:27:25 | Araq | this breaks every invariant in the runtime |
19:27:45 | Varriount | Araq: Windows does not tie a callback to its event notifications. |
19:27:53 | Skrylar | Jehan_: it might work better the way one of the R tools does it, where the 'literate' parts are written as comments and the tangle/weave tools operate that way; the classic literate way where code is written as asides tends to ruin error logs |
19:28:00 | Skrylar | "will the real line 30 please stand up?" |
19:28:26 | Jehan_ | Skrylar: Heh. Yes, that's one of the more annoying problems whenever you have code generation. |
19:29:01 | Jehan_ | I am thinking of having the Talis compiler support literate programming natively, which is why that's on my mind. |
19:29:05 | Skrylar | Jehan_: knitr has support for taking lines which are comments out of an R file, and generating the 'literate' markup from that, so your line information is good to the compiler but your markdown gets shoved to a separate file for processing |
19:29:25 | Skrylar | I think Haskell supports direct literate using bird marks |
19:29:28 | * | BitPuffin joined #nimrod |
19:30:03 | Araq | Varriount: we can talk later about it, I'll be back later |
19:30:16 | Varriount | Araq: Sure, I'll be here for the rest of the day. |
19:30:45 | Skrylar | for some reason i've always liked asciidoc more than reST... maybe its just because reST is usually so glued to pythoners |
19:31:39 | Skrylar | adoc is basically DocBook XML converted in to a markup format |
19:31:53 | Varriount | Araq: I think that, at the very least, some redesigning of the notions that asyncdispatch runs on are going to be needed if multi-threaded async io is going to be supported. |
19:32:56 | Jehan_ | The problem with the Haskell style is that it screws up autoindent. |
19:33:15 | Jehan_ | Skrylar: I also prefer asciidoc, I just hate the asciidoc tooling. |
19:34:03 | Jehan_ | Plus, using XML as an intermediate format. Bleh. XML can diaf as far as I'm concerned. |
19:34:41 | Varriount | *die |
19:35:18 | Jehan_ | Varriount: If you are talking to me, diaf was what I meant. :) |
19:36:32 | Skrylar | Jehan_: use asciidoctor.rb |
19:38:10 | Skrylar | i suspect it would not be hard to write a tool which was able to just yank comments out of a file and make every line NOT a comment in to a source code block |
19:38:17 | Jehan_ | Skrylar: I'm not sure how asciidoctor would fix my issues? |
19:38:27 | Jehan_ | It's still targeting docbook XML as far as I know. |
19:38:43 | Skrylar | both the python and ruby variants of asciidoc can produce html directly y'know |
19:39:02 | Jehan_ | I'm not interested in HTML. |
19:39:24 | Jehan_ | I'm interested in getting a PDF without the absurd docbook toolchain. |
19:39:25 | Skrylar | eh. well basically nothing produces TeX |
19:39:34 | Jehan_ | pandoc? |
19:39:39 | Skrylar | pandoc is silly |
19:39:46 | Jehan_ | It also works? |
19:39:51 | Skrylar | no, it doesn't |
19:39:55 | Skrylar | not for anything beyond markdown |
19:40:04 | Jehan_ | Yes, my point exactly. |
19:40:12 | Skrylar | pandoc is a wonderful markdown toolchain. everything outside of markdown is claimed to be supported but is bricked |
19:40:17 | Jehan_ | Which is why in the end I'm frequently using Markdown over Asciidoc. |
19:40:25 | Skrylar | it can't even be bothered to process `.. include:: somefile` when imitating reST |
19:40:45 | Jehan_ | Yeah, my point exactly. |
19:40:56 | Jehan_ | I want pandoc-like functionality, but for Asciidoc. |
19:41:09 | Skrylar | I'm not sure why pandoc doesn't have support for those |
19:41:36 | Skrylar | I don't recall if John was overly strict about changing the internal format or not, but pandoc's internal format lacks a lot |
19:42:27 | Jehan_ | Skrylar: I'm not blaming him. He primarily wanted a tool for Markdown as far as I know, and pandoc handles Markdown extremely well. |
19:43:01 | Jehan_ | Basically, I think that Asciidoc is the superior format with the inferior tooling. |
19:43:17 | Jehan_ | And yes, that means that I should stop being lazy and write something myself. :) |
19:46:06 | Skrylar | i didn't have problems with the docbook chain outside of it being slow |
19:46:17 | Skrylar | and i don't produce book formats *that* often |
19:46:50 | Jehan_ | Skrylar: PDF is what I need most often. Not necessarily in book format. |
19:47:24 | Jehan_ | Presentation, short white papers to circulate. |
19:49:33 | Varriount | Any of you guys know how Erlang deals with asynchronous IO? |
19:52:58 | Jehan_ | Varriount: Mostly, it doesn't. :) |
19:53:08 | Jehan_ | Erlang processes are really, really lightweight threads. |
19:53:26 | Jehan_ | You just have thousands of them, all of them block when I/O happens. |
19:53:42 | Jehan_ | There are 1+ internal threads that do the actual I/O. |
19:54:18 | Jehan_ | In principle, Go uses a similar approach. |
19:56:37 | Jehan_ | The problem is that underneath you (1) need lightweight user-level threads to make things scale and (2) if one of the lightweight threads accidentally does actually blocking I/O, then you may block an entire worker thread. |
19:57:13 | Jehan_ | That was a big problem with Gnu Pth (which basically does the same for single-threaded programs). |
19:57:24 | Matthias247 | that's why they have custom I/O libraries that will yield instead of block |
19:57:37 | Jehan_ | If any single coroutine in Pth accidentally blocks, the entire program stops. |
19:58:05 | Jehan_ | Matthias247: Yeah, exactly. The problem is that when there's a bug, especially when you call out to external libraries. |
19:58:17 | Matthias247 | it's the same in Go, when you configure it for a single thread |
19:58:54 | Jehan_ | You'd be surprised how many libraries do a blocking DNS lookup, for example. |
19:59:38 | Matthias247 | hmm, I wrapped it in std::async ;) |
20:06:26 | Matthias247 | I'm however still not certain what is really the best approach for concurrency at all |
20:06:56 | Jehan_ | Matthias247: The simple answer is that there is no single best approach? |
20:07:09 | Varriount | There must be a good paradigm/method/concept/way to do asynchronous mult-threaded IO in Nimrod. |
20:07:12 | Matthias247 | probably yes |
20:08:28 | * | Mat3 joined #nimrod |
20:08:38 | Mat3 | Good day |
20:08:41 | Varriount | dom96 wrote asyncdispatch thinking that Nimrod's thread-isolation could be worked around. Even if it can, it will likely be in an awkward way. |
20:08:57 | Jehan_ | Varriount: I honestly haven't yet looked at it. |
20:09:35 | Jehan_ | Matthias247: Though the biggest problem is that 99% of all languages still don't protect you against race conditions. |
20:10:06 | Jehan_ | Which is baffling, because the basic solution has been known for over 40 years. |
20:10:22 | Matthias247 | on the one hand the "modern" async API's like Futures/Tasks, streams, RX are quite nice. But for some state driven things I could imagine that actors also have advantages. And for other things the green threads like in Go |
20:11:10 | Varriount | Jehan_: What is the basic solution? |
20:11:20 | Jehan_ | Matthias247: For the most part, these are all pretty similar in that they're basically somewhat higher level wrappers for message sending. |
20:11:23 | Jehan_ | Varriount: Monitors. |
20:11:50 | Jehan_ | You associate a lock with a piece of data. Before accessing the data, the lock must have been acquired. |
20:12:05 | Jehan_ | Depending on your preferences, this can be done statically or dynamically. |
20:12:21 | Matthias247 | Jehan_: I think the control flow is different |
20:13:16 | Jehan_ | Matthias247: Yes, but different in the same way that functional and imperative languages are different. Both still rely on the same machinery at a fundamental level. |
20:13:39 | Jehan_ | It's all CSP with varying types of syntactic sugar, so to speak. |
20:14:00 | Jehan_ | Mind you, I consider syntactic sugar (and other abstractions) to be pretty important. |
20:14:40 | * | joelmo quit (*.net *.split) |
20:15:50 | * | untitaker_ joined #nimrod |
20:16:08 | Matthias247 | In Actors you have one inbox which receives everything in a queue. And your thread's reentry point is always the receive method. While in Futures the future stores an object which can be set by another thread. And that might either restart your thread (with continuations) or you can make a blocking wait on it |
20:17:42 | * | untitaker quit (Ping timeout: 255 seconds) |
20:18:17 | * | joelmo joined #nimrod |
20:19:17 | Jehan_ | Matthias247: You can think of typical future implementations as having an actor that processes (data, closure) pairs. At least in the abstract. |
20:20:34 | Matthias247 | then you would have multiple actors that would work in the same thread on the same data |
20:20:41 | Matthias247 | which would not be actor-like :) |
20:21:14 | Jehan_ | Multiple actors would be an implementation detail. |
20:22:05 | Matthias247 | for me a basic future is a mutex and a condition variable ;) |
20:23:44 | * | io2 quit (Read error: Connection reset by peer) |
20:24:16 | * | Boscop_ joined #nimrod |
20:27:26 | * | silven_ joined #nimrod |
20:28:50 | * | darkfusi1n joined #nimrod |
20:29:11 | * | Boscop quit (Ping timeout: 260 seconds) |
20:29:16 | * | Boscop_ is now known as Boscop |
20:29:39 | * | silven quit (Read error: Connection reset by peer) |
20:29:39 | * | darkfusion quit (Ping timeout: 260 seconds) |
20:31:08 | * | Mat3 quit (Ping timeout: 260 seconds) |
20:31:33 | * | Mat3 joined #nimrod |
20:34:23 | Araq | Varriount: I consider it a minor problem. Afaict Linux, which is what most people use for servers, doesn't have the problem. And multi-processing for Windows servers is hardly the end of the world
20:35:24 | Araq | in fact, usually I use multiprocessing instead of threads anyway |
20:36:32 | Varriount | Sigh... |
20:37:09 | Jehan_ | Varriount: Working my way through the asyncio stuff now. What's the precise problem with Windows? |
20:37:28 | Varriount | Jehan_: It's not windows in particular, it's Nimrod. |
20:37:42 | Araq | Varriount: that is simply not true. |
20:37:47 | Jehan_ | Varriount: Hmm, but my understanding was that the problem didn't manifest on Linux? |
20:37:56 | Varriount | Jehan_: Eh.. What? |
20:38:00 | Araq | it's Windows' async IO design |
20:38:08 | Jehan_ | Okay, then I must have misread something earlier? |
20:38:17 | Varriount | Jehan_: It's not a problem, it's a design flaw. |
20:38:37 | Varriount | At present, asyncdispatch doesn't make use of multiple threads. |
20:38:38 | Jehan_ | Okay. What's the precise problem, then? |
20:39:01 | * | pafmaf joined #nimrod |
20:40:25 | Varriount | Due to Nimrod's GC design, and asyncdispatch's callback based approach to asynchronous IO, a callback can usually only run in the thread it was created in.
20:40:51 | Jehan_ | I see that. |
20:40:53 | Varriount | Unless my understanding of Nimrod's threading model is completely askew. |
20:41:04 | Jehan_ | Assuming the callback is a closure, yes. |
20:41:06 | Matthias247 | The Winapi is configurable |
20:41:26 | Jehan_ | How does the callback end up in a different thread? |
20:41:26 | Matthias247 | You CAN configure it to dispatch callbacks on any thread that is available to the OS |
20:41:44 | Matthias247 | that's the APC stuff |
20:42:13 | Matthias247 | but you can also avoid that and use a completion port, which you query from the thread you like |
20:42:17 | Varriount | Yes, asyncdispatch doesn't use that stuff. It uses GetQueuedCompletionStatus and friends.
20:42:44 | Matthias247 | I think most implementations do that |
20:42:55 | Matthias247 | Hijacked threads are a pain. The same as unix signal handling ;) |
20:43:26 | Jehan_ | Hmm, looking at asyncio.nim, I'm seeing only an import of select from winlean? |
20:43:28 | Varriount | Matthias247: But most implementations don't have the kind of per-thread memory management that Nimrod has. |
20:43:42 | Varriount | Jehan_: Wrong file. Try asyncdispatch.nim |
20:43:49 | Jehan_ | Varriount: Ah, thanks. |
20:44:09 | Matthias247 | Varriount: I meant most implementations will use the completion port explicitly and will not use APC
20:47:48 | Varriount | Araq: How is Windows' async IO design the problem? As far as I can see, a future-based callback mechanism for multi-threaded async IO would be flawed on any platform.
20:48:14 | Jehan_ | So, the customOverlapped thing is meant to hold an arbitrary payload? |
20:48:19 | Varriount | Jehan_: Yes. |
20:49:12 | Araq | Varriount: I thought the APC was mandatory |
20:49:18 | Varriount | Araq: Nope. |
20:50:19 | Jehan_ | And because there's a closure stuck inside that can reference a different heap from where you started out, you have problems, right? |
20:50:29 | Varriount | Jehan_: Yes. |
20:51:08 | Jehan_ | Okay, now I have to figure out why the code needs a closure in the first place. |
20:51:20 | Varriount | Jehan_: The closure is a callback. |
20:51:34 | Jehan_ | Yes. But you can do callbacks without closures, too. |
20:51:50 | Jehan_ | I'm trying to understand the underlying design reason for having a closure there. |
20:52:02 | Varriount | Jehan_: You mean, stateless callbacks? |
20:52:25 | Jehan_ | Or one with state in a place that is safe to use. |
20:53:54 | OrionPK | hey araq |
20:54:04 | Araq | hi OrionPK |
20:54:16 | OrionPK | what did u need sha1 for |
20:54:38 | Araq | for my experimental compiler branch |
20:55:11 | OrionPK | for what purpose though? out of curiosity |
20:55:32 | Araq | I'll hash types and ASTs into some sha1 and append that to the generated symbols to make C code generation much more deterministic |
20:55:54 | Araq | currently we use IDs that are counters instead |
20:56:03 | OrionPK | ahh I got ya |
20:57:46 | Varriount | Jehan_: Without some sort of external state, callbacks lose a great deal of their efficacy. The only way I can think of for protecting external state is to deep-copy it so that the callback is the only one that accesses the (copied) state. |
20:57:59 | Varriount | And that also has its drawbacks...
20:58:23 | Jehan_ | Varriount: Yes, that's what I'm talking about. Putting it in proper shared memory. |
21:00:55 | Varriount | Araq: Could a closure be marked, such that the compiler could put everything it references into shared memory, rather than thread local storage? |
21:03:25 | * | mwpher joined #nimrod |
21:03:40 | Araq | hi mwpher welcome |
21:03:58 | Araq | Varriount: dunno, I still don't really understand the problem |
21:04:00 | mwpher | Thanks :D |
21:04:16 | Jehan_ | What I'd like to have for this and similar problems is the ability to create shared heaps and a function `withHeap(heap, procvar, payload)`. |
21:05:06 | Varriount | Araq: What don't you understand? |
21:05:08 | dom96 | Araq: The problem is that the standard way to use IOCP with multiple threads is to spawn x threads, and in each one ask the IOCP for notifications of IO completions.
21:05:26 | Matthias247 | it's one way, but NOT the standard way |
21:05:37 | dom96 | Matthias247: What is the standard way? |
21:05:46 | Matthias247 | there is none, it depends on your use case |
21:05:53 | * | Mat3 quit (Quit: Verlassend) |
21:06:05 | * | BitPuffin quit (Quit: WeeChat 0.4.3) |
21:06:05 | dom96 | I'm pretty sure that's how IOCP was designed to be used. |
21:06:06 | Matthias247 | e.g. node.js will also use one thread for completions |
21:06:09 | Jehan_ | By the way, how is the originating thread notified? |
21:06:20 | * | brson joined #nimrod |
21:06:29 | dom96 | Matthias247: I can't even think of any other ways of doing this. |
21:06:30 | * | io2 joined #nimrod |
21:06:33 | Varriount | Jehan_: What do you mean. |
21:06:49 | Matthias247 | I think even the .NET framework uses exactly one shared thread for querying the IOCP
21:06:52 | Jehan_ | Is the callback simply executed when poll() succeeds or what? |
21:07:06 | Varriount | Jehan_: With regards to Nimrod, yes. |
21:07:20 | dom96 | Araq: Subsequently the IO completion notifications (the POverlapped object) contain a callback which is executed right after that notification is received. |
21:07:23 | Jehan_ | Varriount: I see. |
21:07:30 | dom96 | Araq: We can't execute these callbacks in multiple threads. |
21:07:38 | Araq | Jehan_: what's your withHeap proposal about? |
21:08:23 | Jehan_ | I'm not sure what the point is in having a closure callback, though, since the closure can't access the original heap with a thread running in it? Even if it weren't for the heap issues, that would be unsafe. |
21:08:43 | Matthias247 | querying results from multiple threads is only useful for special use-cases. Because then you have to handle synchronisation between these results
21:08:44 | Jehan_ | Araq: Functionality that I've been toying with implementing. |
21:09:23 | Jehan_ | Basically, the function would temporarily switch to a different heap and would execute procvar(copy(payload)), where payload is a string. |
21:09:48 | Jehan_ | In practice, payload would contain a serialized form of structured data. |
21:10:09 | Jehan_ | It would just be a simple way to have shared heaps and transmit data back and forth. |
21:10:21 | Araq | dom96: poll returns -> some callback is picked --> execute the callback via 'spawn'. Problem solved? |
21:10:56 | Jehan_ | Araq: The problem is that the callback closure can reference data in the original heap. |
21:11:19 | Araq | no, spawn will prevent that |
21:11:30 | Araq | but yes, it will create a copy of the data |
21:11:51 | dom96 | That may work. |
21:12:37 | dom96 | Although to be honest I worry that spawning a thread for each callback may be somewhat of a shotgun approach to this problem. |
21:12:54 | Matthias247 | I would simply query the completion port in the event loop of MY thread and execute the stored callback from there |
21:12:57 | dom96 | Perhaps it makes more sense for the user to decide where to place the spawns? |
21:13:00 | Araq | 'spawn' doesn't create a thread |
21:13:10 | Araq | spawn runs the task on the thread pool |
21:13:17 | dom96 | still |
21:13:34 | dom96 | I bet that adds a certain overhead |
21:13:49 | Jehan_ | dom96: Because arguments are being copied? |
21:14:02 | dom96 | In any case it seems unbelievable that it's that simple :P
21:14:43 | Araq | well yes, it is not |
21:14:58 | dom96 | Jehan_: possibly. It'll always be slower than simply executing the callback. I'm not sure about how spawn works. |
21:15:07 | dom96 | But that's my guess |
21:15:15 | Jehan_ | That still means that the thread issuing the request and the thread doing the poll must be the same one? |
21:15:28 | Araq | the spawned proc might enqueue stuff in the dispatcher |
21:15:44 | Araq | and how to do that is the harder part |
21:15:51 | Jehan_ | If that's the case, there's no cross-heap stuff involved and it should be safe? |
21:16:02 | Jehan_ | I.e. you shouldn't need spawn? |
21:16:53 | Jehan_ | If the closure doesn't wander from one thread to another in the first place, then there's no problem? |
21:17:10 | Matthias247 | Araq: normally you would send a message to the completion port which is then fetched by the dispatcher and executed. But when you forward everything to another thread you probably get an endless loop ;)
21:18:57 | Araq | I can't follow. |
21:19:09 | Araq | Thread A: runs the polling loop |
21:19:23 | Araq | -> runs task on thread B |
21:19:24 | Matthias247 | I think it's mostly a question of which kind of API you want in the end. Do you want to have something like .NET's async programming model, where a callback is executed on the threadpool and you can do blocking waits on the result? Then you need the background thread(pool).
21:19:34 | Jehan_ | I was originally thinking there were multiple threads involved, but right now this seems to be a single thread that both queues requests and handles callbacks? |
21:19:35 | * | ARCADIVS quit (Quit: WeeChat 0.4.3) |
21:20:05 | * | mwpher quit (Quit: mwpher) |
21:20:07 | Matthias247 | or do you only want futures with continuations and async/await? Then you can simply poll the queue from the main eventloop thread, complete the future from there, and that will enqueue the continuation on the event loop thread
21:20:36 | * | retsej joined #nimrod |
21:21:10 | Araq | thread B: starts a new async operation. Problem: how does it tell thread A about it? |
21:21:18 | Matthias247 | Jehan_: dom96 wants to use multiple threads, but you can also use only one and have start and callback on the same thread
21:21:35 | Jehan_ | Matthias247: Oh, so it's about a future design, not the current one? |
21:22:14 | dom96 | Jehan_: yes |
21:22:20 | Matthias247 | Jehan_: Sorry, can't tell you how Nimrod's current implementation looks in detail
21:22:26 | Jehan_ | Then the starting thread still must pass the closure environment to the dispatcher thread and spawn is insufficient. |
21:22:55 | Jehan_ | Matthias247: I'm reading it myself for the first time right now, not much different for me. :) |
21:23:16 | Araq | Jehan_: the way I see it: |
21:23:40 | * | BlameStross joined #nimrod |
21:24:11 | Jehan_ | dom96: What you seem to need is (in lieu of a closure) a procvar with an explicit environment. |
21:24:25 | Araq | it spawns a worker, the worker also gets some handle/channel which it can use to submit some other IO request |
21:24:26 | Jehan_ | With the environment being a ptr to the shared heap. |
21:25:30 | Jehan_ | The problem you still have is that any callback to the originating thread is impossible to do safely. |
21:26:10 | Jehan_ | Assuming that the callback can be arbitrary code. |
21:26:52 | Varriount | What if, instead of taking a callback based approach, the Windows IOCP model was used instead?
21:27:58 | Varriount | Eg: A queue with notifications of IO completion that is shared among threads. |
21:29:24 | * | Trustable quit (Quit: Leaving) |
21:29:28 | Jehan_ | Varriount: What I'm not sure about is how threads would use either design or how they'd benefit from it.
21:30:00 | Jehan_ | Making the I/O asynchronous means that the original thread keeps running. |
21:30:25 | Matthias247 | and then other threads will poll that and get notifications about IOs that they haven't started? :) |
21:30:32 | * | goobles joined #nimrod |
21:31:05 | Jehan_ | So, how is it going to learn of the notification? Busywaiting? |
21:31:06 | Varriount | Matthias247: Yes, however, if I'm reading the Windows API correctly, there's a way to put notifications back into the queue. |
21:31:12 | Matthias247 | I think you should at first clarify for what exactly you need multiple threads |
21:31:26 | Matthias247 | and then look for a solution therefore |
21:31:41 | Jehan_ | What Matthias247 said. I.e., I'm not sure there's a clear model for how this is supposed to be used. |
21:31:51 | dom96 | So that we can scale to multiple cores. |
21:32:03 | Jehan_ | Once you have figured out how it's supposed to be used, you can implement it. |
21:32:13 | Varriount | The ideal situation for IOCP is in a client-per-thread approach, where multiple consumers are all requesting a resource from one producer. Normally, this kind of approach to asynchronous IO is bad, because of the contention this creates (the resource becomes available, and the thundering herd of threads all try to grab it). In the case of IOCP however, the OS explicitly controls which threads are passed the resource.
21:32:13 | Varriount | Instead of all the threads waking up, only one is. |
21:32:21 | Jehan_ | dom96: That goes without saying, but how are these threads supposed to do their work?
21:32:53 | dom96 | Jehan_: They are supposed to accept and process as many connections as possible as fast as possible. |
21:34:04 | Jehan_ | dom96: You don't need async I/O for that. |
21:34:40 | Jehan_ | More importantly, it doesn't tell us how the threads are supposed to process connections. |
21:35:10 | Varriount | API Design is hard >_< |
21:35:29 | Matthias247 | you can still scale to multiple threads by using one completion queue in each thread (each threads eventloop) |
21:37:10 | * | Jesin joined #nimrod |
21:37:20 | Jehan_ | Honestly, a simple design to do that would be to use the current single-threaded polling loop and have callbacks simply use spawn for parallelism.
21:38:03 | Matthias247 | with boost asio you can use both approaches: Using one io_service (proactor) from multiple threads or using one per thread. I started with the first approach, but it ended up in a synchronization nightmare
21:39:12 | Matthias247 | Jehan_: I would spawn on application level. E.g. when you receive a callback or future continuation that your HTTP server accepted a new connection then move that connection to another thread which will handle it |
21:39:38 | Varriount | http://www.coastrd.com/windows-iocp |
21:39:39 | Jehan_ | Matthias247: That's exactly what I'm talking about. |
21:40:20 | Matthias247 | but not automatically move callbacks in the framework to arbitrary threads |
21:41:00 | Jehan_ | ??? |
21:41:55 | Jehan_ | Matthias247: Not sure what that last part was supposed to mean. |
21:43:16 | Matthias247 | Let's say you do socket.async_read(buffer, size).then((bytesRead) -> { print("Read " + bytesRead + " bytes"); }); |
21:43:30 | Matthias247 | Where should the continuation be executed? |
21:43:37 | dom96 | boost doesn't have a macro which builds on top of its async stuff like we do. |
21:44:01 | dom96 | I wonder if C#'s await scales to multiple cores. |
21:44:20 | Matthias247 | If you have multiple threads running that query for completions then it could be invoked on any core
21:44:55 | Jehan_ | Matthias247: Yes, that's what I was talking about. |
21:44:55 | Matthias247 | dom96: .NET allows you to specify where to start the continuation. There's a SynchronizationContext parameter for Future.ContinueWith |
21:45:40 | Matthias247 | and async/await will query SynchronizationContext.Current and will set the Continuation to be executed in the same context where await was started |
21:46:19 | Jehan_ | That's what spawn is supposed to do. |
21:46:35 | Matthias247 | but that depends on the ability of Task<T> to interact with a scheduler |
21:47:33 | Jehan_ | The biggest problem that Nimrod currently has here is the very limited way of handling shared data. |
21:47:48 | Matthias247 | c++ will get that too: future<T>::then(std::executor&, std::function<void(future<T>)>) |
21:47:50 | Jehan_ | ways* |
21:48:15 | Matthias247 | you can explicitly specify on which executor/thread a callback will be scheduled
21:48:21 | * | Varriount|Mobile joined #nimrod |
21:48:39 | Jehan_ | Which is why I keep harping on shared heaps (or some equivalent way of exchanging data). :0 |
21:50:02 | * | BitPuffin joined #nimrod |
21:54:08 | * | vendethiel- is now known as vendethiel-- |
21:57:41 | * | vendethiel-- is now known as vendethiel |
21:58:38 | Varriount|Mobile | Another interesting article comparing the reactor and proactor: http://www.artima.com/articles/io_design_patternsP.html
22:08:47 | Varriount|Mobile | The standard seems to be to have one proactor per thread, and only share the proactor across multiple threads when a generic thread pool can be easily used. |
22:21:44 | * | EXetoC quit (Quit: WeeChat 0.4.3) |
22:25:28 | Araq | Jehan_: I still don't understand your withHeap ... :-) |
22:26:12 | Jehan_ | Araq: Temporarily switches the current memory region (TMemRegion) to another one, copies payload to the new heap, executes the procvar argument. |
22:26:46 | Jehan_ | It's a very barebones way of having shared heaps for dealing with threads having to access structured data. |
22:27:02 | Jehan_ | shared structured data* |
22:27:27 | Araq | how does it execute the procvar? in a different thread? |
22:27:43 | Jehan_ | Same thread. |
22:28:21 | Jehan_ | It basically switches the current thread-local heap temporarily to a different one. Restores it upon return from the call. |
22:29:03 | Araq | and the point being? |
22:29:11 | Jehan_ | The shared heap will contain a hash table or queue or some other data structure that's too big to send across a channel. |
22:29:41 | * | ARCADIVS joined #nimrod |
22:29:42 | Jehan_ | One thread stuffs data in that data structure, others can take it out. |
22:30:22 | Araq | ok, so I switch to a *shared* heap |
22:30:50 | Jehan_ | Yup. obviously, you'll also need ways to create and destroy shared heaps. |
22:31:24 | Jehan_ | I'm not saying you should do that. It would be a way to get that functionality with relatively low effort, as far as I understand the current implementation. |
22:31:32 | Araq | there is an easier way |
22:31:57 | Jehan_ | Oh, and the reason why I mentioned it was that you can create closures within such a shared heap. |
22:32:47 | Jehan_ | As I said, it's a pretty barebones approach. That would be my biggest concern, creating something temporary that people may start using and then it would be difficult to get rid of it again. |
22:33:57 | Araq | how do you deal with the locking? |
22:34:16 | Jehan_ | Heap has an associated lock that withHeap acquires/releases. |
22:34:49 | Jehan_ | Because that's the only way to access the heap, it should be safe. |
22:35:13 | Jehan_ | Well, other than putting references in global variables, but that's no different from now. :) |
22:35:38 | Araq | we have a solution for that btw |
22:35:50 | Araq | it's already in 0.9.4, but disabled |
22:35:55 | Jehan_ | Oh? |
22:36:20 | Araq | it's an effect 'gcsafe' |
22:36:29 | Jehan_ | Ah, gotcha. |
22:36:59 | Araq | noSideEffect implies gcSafe and this is really beautiful |
22:37:02 | Jehan_ | Okay, to be clear, you'd also get a problem here assigning to thread-local variables that the current approach doesn't have. |
22:37:56 | Jehan_ | What I'm describing is basically the Erlang model again for the case that sending a message to a process has RPC semantics. |
22:38:46 | Araq | the problem with your solution as far as I can see is that the shared heap ... oh wait |
22:38:57 | Araq | hmm |
22:39:20 | Araq | I see |
22:39:42 | Jehan_ | Note that you still need to deep copy data in and out of the shared heap. |
22:40:06 | Araq | so thread A cannot pass its own heap to thread B, but some other allocated guarded heap |
22:40:17 | Jehan_ | Correct. |
22:40:30 | Araq | and this way you ensure thread A cannot access it either without holding the lock |
22:40:53 | Jehan_ | Eiffel's SCOOP also essentially works this way, just with a lot of extra mechanisms for making it convenient. |
22:41:00 | Araq | see? and this is why barriers are so sweet |
22:41:07 | Jehan_ | Huh? :) |
22:41:20 | Araq | with a barrier thread A can pass its own heap to B |
22:41:50 | Araq | the barrier can prevent that thread A runs while its heap is away |
22:42:05 | Jehan_ | By the way, there's another clever trick: Nobody says all heaps have to use the same allocation strategy. |
22:42:17 | Araq | I know |
22:42:25 | Jehan_ | So you can use a heap with a simple bump allocator for temporary stuff. |
22:42:37 | Araq | but every instruction on the allocation path is measurable |
22:43:06 | Araq | (kind of) |
22:43:12 | Jehan_ | Not sure how a barrier would work, though, I'd have used a condition variable? |
22:43:44 | Araq | it doesn't matter how it is implemented really I'm talking about the concept |
22:43:50 | Jehan_ | Ah. |
22:45:16 | Araq | spawn foo(myheap); spawn bar(myheap); sync; |
22:45:40 | Araq | --> foo and bar use the lock because of API design |
22:46:01 | Araq | spawning thread doesn't access its heap because it syncs |
22:46:34 | Jehan_ | Incidentally, I'm using the same approach (shared heaps) in my current day job. |
22:46:34 | flaviu | I assume the easiest way to make `string|uint64` is to create a variant? |
22:46:53 | Jehan_ | Because it has to be applied to a C++ code base several 100k in size and it's about the least painful way. |
22:47:17 | Jehan_ | flaviu: Yes, I'd say so. |
22:47:40 | Jehan_ | several 100 KLoc* |
22:48:18 | Jehan_ | variant records ARE sum types. :) |
22:48:43 | Jehan_ | Different syntax for type t = A of foo | B of bar in OCaml, essentially. |
22:48:49 | Araq | flaviu: no the easiest way is to use 'string' |
22:48:52 | Jehan_ | Okay, minus the reference. |
22:49:12 | Araq | a string can encode anything already, including uint64 |
22:50:03 | flaviu | I guess so, but I think that'll be more work than a variant |
22:50:31 | * | XAMPP-8 joined #nimrod |
22:51:23 | * | XAMPP_8 joined #nimrod |
22:53:41 | * | Matthias247 quit (Read error: Connection reset by peer) |
22:54:01 | Jehan_ | Araq: Would you be interested in having a binary variant of marshal.nim, by the way? |
22:54:11 | Araq | sure |
22:54:51 | * | XAMPP-8 quit (Ping timeout: 240 seconds) |
22:55:04 | Jehan_ | Okay, I may do that later this week. |
22:55:35 | Jehan_ | I need most of the functionality for something else, may as well share if there's interest. |
22:56:14 | Araq | dom96: pulled my babel PR? |
22:59:07 | * | XAMPP_8 quit (Ping timeout: 240 seconds) |
23:01:49 | dom96 | Araq: did you fix your mistakes? |
23:02:06 | Araq | yes, I hope so. even read your docs. |
23:03:20 | dom96 | Araq: It will work with 0.9.4? |
23:03:30 | Araq | pretty sure it does, yes |
23:04:18 | NimBot | nimrod-code/packages master a026b60 Araq [+0 ±1 -0]: added c2nim and pas2nim packages |
23:04:18 | NimBot | nimrod-code/packages master 4af0465 Araq [+0 ±1 -0]: proper tagging |
23:04:18 | NimBot | nimrod-code/packages master 9e89ba1 Dominik Picheta [+0 ±1 -0]: Merge pull request #67 from Araq/master... 2 more lines |
23:04:24 | dom96 | voila |
23:04:37 | Araq | thanks |
23:05:25 | Jehan_ | Ugh, I think I may not be able to do the general version I envisioned. Hmm, will see. :) |
23:07:27 | Araq | Jehan_: we can get bump pointer allocation with 0 cost in rawAlloc |
23:07:35 | Araq | with a simple trick |
23:07:46 | dom96 | Araq: it works. You forgot to increment the version output for -v |
23:07:52 | Araq | now I'm thinking about rawDealloc |
23:08:09 | Araq | dom96: ok ... thanks |
23:08:36 | Jehan_ | Araq: Hmm, what I mentioned earlier was just "allocate only, don't bother with deallocating, just throw the entire heap away when done". |
23:09:13 | dom96 | Finally. A blog post talking about Go's faults http://yager.io/programming/go.html |
23:09:44 | Jehan_ | Finally? Haven't there been hundreds? :) |
23:10:07 | Jehan_ | That said, in all fairness, Go does have a major benefit: simplicity. |
23:10:32 | dom96 | perhaps. It's finally on the front page of HN though. |
23:11:00 | Jehan_ | I'd personally argue that they oversimplified in some unnecessary places, but they should also get credit for the value of it. |
23:11:03 | flaviu | Jehan_: Why should I use Go over Java? |
23:11:23 | flaviu | Java is also very simple, if you don't go looking for complexity |
23:11:23 | Jehan_ | Because Java is a pretty heavyweight white elephant? |
23:11:31 | Jehan_ | Huge startup times, huge memory footprint. |
23:11:37 | Jehan_ | Java isn't simple. |
23:11:43 | Jehan_ | Java 1.0 was, in some ways. |
23:11:48 | dom96 | Pity that blog post doesn't mention Nimrod. |
23:11:57 | flaviu | dom96: Write one yourself |
23:12:03 | dom96 | flaviu: Already did. |
23:12:19 | Jehan_ | dom96: Probably because the author doesn't know it. Nimrod is a bit of a dark horse. |
23:12:35 | flaviu | dom96: I don't see it on your page, is it elsewhere? |
23:12:43 | Jehan_ | I know about it because I look at programming languages all the time, especially obscure ones. |
23:13:13 | dom96 | flaviu: It's not about Go being bad though. It's about why you should use Nimrod. |
23:13:14 | Jehan_ | Nimrod was just one that I kept using. |
23:15:30 | Araq | oh it's about "simplicity" again ... |
23:15:33 | * | Araq sighs |
23:15:35 | Jehan_ | Reading the blog post now and not very impressed with some of the criticism. |
23:15:55 | Jehan_ | It seems to be another "This language isn't enough like Haskell" posts. |
23:16:08 | dom96 | Yeah, it's not the greatest. |
23:16:20 | Araq | what is simple now was advanced 20 years ago |
23:16:41 | dom96 | good night |
23:16:50 | flaviu | Jehan_: I still don't understand why I should use Go over Java 1.0. Both are missing generics, any semblance of performance, operator overloading. Main advantage of Go seems to be a bit of type inference |
23:17:08 | Araq | simple means *old* |
23:17:26 | Jehan_ | flaviu: Performance. |
23:17:35 | Jehan_ | if you're talking 1.0 :) |
23:17:41 | goobles | Go is repulsive;0 |
23:17:46 | Jehan_ | 1.0 didn't have a compiler, it was a bytecode interpreter. |
23:17:50 | Araq | I'm pretty sure function calls were an advanced "complex" feature once |
23:17:54 | flaviu | Jehan_: IIRC go also has subpar performance |
23:18:10 | Jehan_ | Araq: Yes. Think BASIC (gosub) and COBOL. |
23:18:24 | Jehan_ | flaviu: Not bytecode-interpreter-bad.
23:18:28 | Araq | "This is great, because it forces programmers to ask if they really need that variable to be mutable, which encourages good programming practice and allows for increased optimization by the compiler." |
23:18:40 | Jehan_ | And, as I said, the JVM has issues with startup time and memory footprint. |
23:18:44 | Araq | yes, and I *really* need that mutable state, get lost |
23:19:06 | Jehan_ | Araq: That's exactly what I meant by "this language isn't enough like Haskell". |
23:19:11 | Araq | also immutability is often very hard to optimize *away* |
23:19:25 | * | darkf joined #nimrod |
23:20:12 | Jehan_ | You can achieve immutability by encapsulating data and not providing methods to mutate it. |
23:21:11 | flaviu | Jehan_: one of the things that people hate most in java is the "getFoo", "setFoo" crap |
23:21:41 | Jehan_ | Mind you, there are some places where I like it if you can declare things as immutable, but I don't think that's what the author had in mind. |
23:22:03 | Jehan_ | flaviu: Not really specific to Java, though. :)
23:22:24 | Jehan_ | It's also something that I liked about Sather back in the days. |
23:22:33 | fowl | imperative uber alles |
23:22:39 | flaviu | Araq: I don't really know about that. The immutability thing can be thrown away after semantic checking; I don't see how it'd hurt things |
23:23:28 | Jehan_ | fowl: I don't have a very strong preference for functional or imperative programming, but I think that *pure* functional programming is actively harmful. |
23:23:57 | flaviu | Also, immutability makes it easier to reason about a program, if anything. If you declare stuff one way, you don't have to worry about it changing which lets you not have to think of that variable |
23:24:11 | * | pafmaf quit (Quit: This computer has gone to sleep) |
23:24:22 | Jehan_ | flaviu: The thing is that immutability is only one invariant, and a very simple one. |
23:24:29 | * | joelmo quit (Quit: Connection closed for inactivity) |
23:25:00 | Jehan_ | Generally, you express invariants over ADTs by restricting the methods that you can use to mutate them. |
23:25:14 | Jehan_ | Immutability is the simple case where you have no methods that can do that. |
23:26:04 | flaviu | Jehan_: I'm speaking of local immutability here mostly |
23:26:27 | flaviu | eg, I declare a local variable at the top and I know that even if I don't read half the method, it'll be the same |
23:27:16 | Jehan_ | flaviu: That's one place where immutability comes in handy, yes. See, e.g., let in Nimrod. |
23:27:27 | Jehan_ | But that's not what the author is talking about. |
23:27:50 | Jehan_ | The biggest problems with mutable state in my experience have been procedures that change some random global variable. |
23:30:18 | Araq | flaviu: immutability loves trees, the hardware loves arrays |
23:30:42 | Jehan_ | Araq: Amen. |
23:31:13 | Araq | that's why you won't find a functional systems programming language |
23:31:18 | Jehan_ | Why don't people in computer algebra love Haskell? Because 10000x10000 matrices over finite fields are a PITA in Haskell. |
23:31:33 | flaviu | I see what you mean, immutability encourages less efficient data structures |
23:31:50 | flaviu | not necessarily that the compiler has a harder time |
23:31:59 | Araq | yes |
23:32:16 | Araq | or that the compiler HAS a hard time transforming a tree into an array
23:32:27 | Araq | or even things like: |
23:32:42 | Araq | let (data, success) = foo(data) |
23:33:01 | Araq | --> let success = foo(addr data) |
23:33:08 | Araq | are not trivial to do |
23:33:22 | Jehan_ | By the way, Araq, I have to write this down so that I can quote it. :) |
23:33:47 | * | io2 quit () |
23:33:52 | Araq | alright. |
23:36:06 | Araq | when C++ was new all this overloading, default parameters etc. was "complex" |
23:36:19 | Araq | ok C++ still gets lots of blame for overloading |
23:36:33 | goobles | whats wrong wid overloading;0 |
23:36:35 | Araq | but Java and C# have it too and Java is "simple" |
23:36:51 | Araq | simplicity doesn't mean much at all |
23:36:51 | Jehan_ | Overloading is a double-edged sword. |
23:37:24 | goobles | no it is awesome, C++ should let me overload every symbol
23:37:34 | goobles | why can't i overload my $ |
23:37:38 | goobles | WAAHH |
23:38:06 | Jehan_ | Araq: I think there's a difficulty threshold where a significant percentage of programmers begin to struggle. |
23:38:31 | goobles | struggle wid what overloading? |
23:38:53 | Araq | Jehan_: that threshold is not static. |
23:39:04 | Jehan_ | Which, to get back to an earlier topic, is why I think there is a clear audience for things like Crystal. |
23:39:18 | Jehan_ | Araq: But only because of the Flynn effect. :) |
23:39:26 | Araq | In 20 years people have other new "complex" features to worry about |
23:40:05 | Jehan_ | For what it's worth, I think that C++ is objectively too complex for large software systems if it isn't reined in by using a subset. |
23:40:23 | Araq | no, that is exactly not the problem. |
23:40:38 | Araq | C# is very complex too and nobody really complains |
23:40:50 | Araq | it has to be very complex for the simple reason it has LOTS of features |
23:41:01 | Jehan_ | C# is easier in some very fundamental ways. |
23:41:08 | Araq | C++'s problem is the lack of *memory safety* |
23:41:08 | goobles | crystal never heard of it... oh boy yet another language using conservative collector |
23:41:12 | Jehan_ | For starters, automatic memory management. |
23:41:21 | Jehan_ | Heh. :) |
23:41:41 | Araq | C# has it and proves complexity with safety is entirely workable |
23:42:00 | Jehan_ | But yeah, the hoops that you have to jump through to deal with C++'s lack of automatic memory management are a huge contributor to its complexity.
23:42:22 | goobles | C++ has automatic memory management;0 |
23:42:23 | Jehan_ | And it lacks (or lacked) features that require other features that are actually more complex. |
23:42:44 | Jehan_ | goobles: Shared pointers are a joke, if you mean that. :) |
23:42:51 | Araq | even that wouldn't be that much of a problem if it was *safe* after the compiler accepts your programs |
23:42:57 | goobles | no mostly unique_ptrs Jehan |
23:43:01 | Jehan_ | They're a performance hog, which is why everybody works around them. |
23:43:03 | goobles | shared_ptr is rare |
23:43:09 | Jehan_ | unique_ptrs aren't automatic memory management. |
23:43:16 | goobles | yes they are |
23:43:18 | Jehan_ | They're by definition manual memory management. |
23:43:32 | goobles | nope they automatically clean up |
23:43:35 | Jehan_ | Automatic memory management means you don't have to worry about ownership and lifetime. |
23:43:53 | flaviu | Araq: I still don't understand your lambda idea |
23:44:11 | Jehan_ | Different definitions: Automatic memory management, as it's understood in the literature, means garbage collection, reference counting, and such. |
23:44:16 | flaviu | from last night |
23:44:18 | Araq | flaviu: you mean my lifting operator @ ? |
23:44:22 | flaviu | Yes |
23:44:33 | goobles | you always must worry about ownership and lifetime, even in a GC language or you will hold on to shit forever |
23:44:56 | Araq | it's simply a syntactical feature to make a ternary operator a binary operator |
23:45:09 | Araq | a @`+` b |
23:45:12 | Araq | becomes |
23:45:17 | Jehan_ | goobles: Funnily enough, LISP has had GC since the 1950s and has never worried about either. |
23:45:24 | Araq | @(`+`, a, b) |
23:45:46 | flaviu | Oh, I see |
23:46:17 | Araq | the primary use case seems to be Lifting |
23:46:18 | Jehan_ | Hmm, is the first argument fixed or can it be a variable? |
23:46:20 | goobles | i'm sure they did, it is pretty easy to tuck away a GC reference that keeps something alive way beyond when it should have died |
23:46:38 | Araq | but I'm sure we'll find lots of others |
23:46:58 | Araq | Jehan_: can be a variable |
23:47:28 | flaviu | TBH I don't really like the way it looks, the @ looks weird |
23:47:36 | Jehan_ | Araq: Not sure I understand the intended application? |
23:48:01 | Araq | flaviu: the @ is just a placeholder for any operator symbol |
23:48:23 | Araq | Jehan_: lift an operator on the fly to e.g. seqs |
23:48:42 | Araq | a + b # + for atomic values |
23:48:53 | Araq | a @`+` b # vector addition |
23:49:18 | flaviu | Araq: What's your opinion on http://stackoverflow.com/a/8001065/2299084 "Placeholder syntax"?
23:49:43 | Araq | flaviu: we might get it, _ is currently not even a token |
23:49:55 | Jehan_ | Hmm, so foldl, basically? |
23:50:16 | Araq | foldl is a reducer |
23:50:25 | Jehan_ | Oh, I see. |
23:50:27 | Araq | basically like 'map' |
23:50:42 | Jehan_ | Got it. Not sure why it needs extra syntax, though? |
23:51:43 | Jehan_ | Big problem with operators for non-obvious uses is that you can't effectively grep or Ctrl-F for them. |
23:52:08 | goobles | who cares thats crappy linux stuff;0 My IDE can find them just fine |
23:52:54 | flaviu | I think a good idea is to design the language for idiots unless there's a good reason otherwise, and I'm not really sure this would pass that test
23:53:00 | Jehan_ | goobles: I'm talking about documentation. |
23:53:14 | Araq | goobles: I agree. :-) |
23:53:39 | Jehan_ | Try googling for "operator []". |
23:53:49 | goobles | oh like on google? |
23:53:53 | Araq | flaviu: design for idiots and you get Go and Java |
23:54:11 | Jehan_ | Not just Google, any form of unstructured text. |
23:54:21 | * | johnsoft quit (Ping timeout: 240 seconds) |
23:54:28 | goobles | well the syntax for overloading operators in C++ is pretty stupid |
23:54:33 | goobles | but they are still useful |
23:54:40 | * | johnsoft joined #nimrod |
23:54:54 | Araq | Jehan_: and yet [] is already an operator in many many languages |
23:54:55 | Jehan_ | Think web pages, ebooks, PDFs, generated docs, etc. |
23:55:26 | Jehan_ | Araq: Yes, and because it's pretty obvious how it's used, that's mostly not a problem. |
23:55:32 | Jehan_ | [] is for indexing by convention. |
23:55:37 | Araq | what if I need to google C#'s ?? operator? |
23:55:42 | Araq | that's not overloadable |
23:55:49 | * | XAMPP-8 joined #nimrod |
23:55:50 | Araq | but how does that help you googling it? |
23:55:56 | flaviu | Araq: But on the other hand, people USE java ;p |
23:55:57 | Jehan_ | Araq: That's why I'm not a big fan of that, either. |
23:56:48 | Jehan_ | It's not about overloadability, it's about using a bunch of special characters. |
23:56:54 | Jehan_ | Or worse, a single special character. :) |
23:57:34 | Jehan_ | Tools for unstructured text tend to deal in words, at worst with numbers and possibly underscores/hyphens mixed in. |
23:57:43 | Jehan_ | s/worst/best/* |
23:58:27 | flaviu | Of course, you have to balance things out; making constructs powerful while keeping them simple, two goals that are somewhat at odds with each other