00:03:23 | reactormonk_ | araq, nah, stuff breaks with my solution. We can't do that :-/ |
00:08:27 | * | freezerburnv joined #nimrod |
00:09:25 | reactormonk_ | o/ freezerburnv |
00:09:38 | freezerburnv | o/ reactormonk_ |
00:15:35 | * | xenagi joined #nimrod |
00:19:42 | * | xenagi quit (Client Quit) |
00:21:06 | * | xenagi joined #nimrod |
00:49:42 | * | bitaco joined #nimrod |
00:59:47 | * | nande quit (Ping timeout: 245 seconds) |
01:02:02 | * | q66 quit (Quit: Leaving) |
01:05:56 | * | hoverbear joined #nimrod |
01:08:15 | * | sdw joined #nimrod |
01:16:00 | * | darkfusion joined #nimrod |
01:28:56 | * | nande joined #nimrod |
01:48:46 | Varriount | !seen filwit |
01:48:46 | NimBot | filwit was last seen on Wed Jun 4 14:48:18 2014 quitting with message: Quit: Leaving |
01:49:11 | Varriount | !seen Varriount-Mobile |
01:49:11 | NimBot | Varriount-Mobile was last seen on Thu Jun 12 16:40:46 2014 quitting with message: Remote host closed the connection |
02:01:12 | * | Boscop_ joined #nimrod |
02:02:37 | * | Boscop quit (Read error: Connection reset by peer) |
02:16:11 | * | def-_ joined #nimrod |
02:19:44 | * | def- quit (Ping timeout: 252 seconds) |
02:29:30 | * | BitPuffin joined #nimrod |
02:44:47 | * | flaviu quit (Remote host closed the connection) |
02:52:11 | * | Jesin quit (Ping timeout: 240 seconds) |
03:03:04 | * | Jesin joined #nimrod |
03:14:51 | * | bitaco quit (Quit: leaving) |
03:24:16 | * | Kazimuth quit (Quit: gas leak) |
03:26:01 | * | freezerburnv quit (Quit: freezerburnv) |
03:28:38 | * | hoverbear quit () |
03:29:28 | * | Demos joined #nimrod |
03:34:32 | * | Demos quit (Ping timeout: 244 seconds) |
03:34:58 | * | def- joined #nimrod |
03:38:23 | * | def-_ quit (Ping timeout: 252 seconds) |
03:42:47 | * | bjz_ joined #nimrod |
03:48:09 | * | bjz_ quit (Ping timeout: 255 seconds) |
03:52:07 | * | saml_ joined #nimrod |
03:52:48 | * | xenagi quit (Quit: Leaving) |
04:22:51 | * | OrionPK quit (Remote host closed the connection) |
04:42:26 | * | def-_ joined #nimrod |
04:46:02 | * | def- quit (Ping timeout: 252 seconds) |
05:02:42 | * | BitPuffin quit (Ping timeout: 245 seconds) |
05:19:27 | * | gsingh93_ quit (Quit: Connection closed for inactivity) |
05:19:40 | * | saml_ quit (Quit: Leaving) |
05:24:21 | * | def- joined #nimrod |
05:27:50 | * | def-_ quit (Ping timeout: 252 seconds) |
05:38:43 | * | xtagon quit (Quit: Leaving) |
05:41:37 | * | def-_ joined #nimrod |
05:44:20 | * | def- quit (Ping timeout: 252 seconds) |
05:45:30 | * | bjz_ joined #nimrod |
05:50:03 | * | bjz_ quit (Ping timeout: 240 seconds) |
05:54:57 | * | def- joined #nimrod |
05:58:05 | * | def-_ quit (Ping timeout: 252 seconds) |
06:10:02 | * | BitPuffin joined #nimrod |
06:14:41 | * | fowl quit (Ping timeout: 264 seconds) |
06:27:15 | * | fowl joined #nimrod |
06:50:17 | * | dirkk0 joined #nimrod |
06:50:59 | * | def-_ joined #nimrod |
06:54:11 | * | def- quit (Ping timeout: 252 seconds) |
07:00:58 | * | nande quit (Read error: Connection reset by peer) |
07:22:00 | * | io2 joined #nimrod |
07:31:02 | * | dirkk0 left #nimrod ("Leaving") |
07:48:33 | * | def- joined #nimrod |
07:49:11 | * | def-_ quit (Ping timeout: 252 seconds) |
07:49:14 | * | kunev joined #nimrod |
08:09:29 | * | io2 quit (Read error: Connection reset by peer) |
08:11:11 | * | io2 joined #nimrod |
09:00:24 | * | freezerburnv joined #nimrod |
09:04:47 | * | freezerburnv quit (Ping timeout: 245 seconds) |
11:19:45 | * | Guest63531 quit (Quit: Leaving) |
11:29:15 | * | xenagi joined #nimrod |
12:07:31 | * | flaviu joined #nimrod |
12:19:29 | * | untitaker quit (Ping timeout: 264 seconds) |
12:24:32 | * | untitaker joined #nimrod |
12:30:50 | * | zshazz joined #nimrod |
12:37:08 | * | xenagi quit (Quit: Leaving) |
12:50:10 | * | Boscop_ is now known as Boscop |
13:04:25 | * | darkf quit (Quit: Leaving) |
13:42:36 | * | BitPuffin quit (Ping timeout: 255 seconds) |
13:53:41 | * | noam quit (Ping timeout: 264 seconds) |
13:56:17 | * | OrionPK joined #nimrod |
13:57:59 | OrionPK | hola |
14:03:41 | araq | servus |
14:04:11 | * | freezerburnv joined #nimrod |
14:54:46 | * | superfunc quit (Quit: leaving) |
14:55:15 | * | superfunc joined #nimrod |
14:55:40 | * | superfunc quit (Client Quit) |
14:55:46 | * | superfunc_ quit (Quit: Page closed) |
14:58:49 | * | superfunc joined #nimrod |
15:04:09 | * | Raynes quit (Max SendQ exceeded) |
15:04:18 | * | Raynes joined #nimrod |
15:04:34 | * | Raynes quit (Changing host) |
15:04:34 | * | Raynes joined #nimrod |
15:07:40 | * | BitPuffin joined #nimrod |
15:16:57 | * | KevinKelley joined #nimrod |
15:20:12 | dom96 | hi KevinKelley |
15:29:05 | * | Jesin quit (Ping timeout: 244 seconds) |
15:33:27 | * | kunev quit (Quit: leaving) |
16:03:19 | * | Matthias247 joined #nimrod |
16:18:56 | * | Trustable joined #nimrod |
16:19:24 | * | Raynes quit (Max SendQ exceeded) |
16:28:35 | * | Raynes joined #nimrod |
16:28:51 | * | Raynes quit (Changing host) |
16:28:51 | * | Raynes joined #nimrod |
16:38:23 | * | raleigh joined #nimrod |
16:50:50 | * | q66 joined #nimrod |
16:50:50 | * | q66 quit (Changing host) |
16:50:50 | * | q66 joined #nimrod |
17:06:27 | araq | hi raleigh welcome |
17:08:10 | * | Jehan_ joined #nimrod |
17:12:44 | * | askatasuna joined #nimrod |
17:15:01 | * | askatasuna quit (Client Quit) |
17:15:48 | * | Matthias247 quit (Read error: Connection reset by peer) |
17:19:23 | araq | hi KevinKelley. thanks for the Cairo binding but I think we already have one. Possibly outdated though. :-) |
17:20:09 | reactormonk_ | araq, as mentioned, the float idea doesn't work too well. |
17:22:09 | * | bastian__ joined #nimrod |
17:22:19 | bastian__ | hey |
17:24:04 | flaviu | Seems I can't call an iterator recursively |
17:24:24 | bastian__ | say I have a library which has a proc returning a type that is not exported. is there any way to create a reference to that type without explicitly mentioning it? basically an "any" type |
17:26:02 | raleigh | araq, thank you |
17:33:55 | flaviu | It seems I can't forward-declare an iterator. The manual doesn't seem to mention forward declaring iterators, I assume it isn't supposed to be possible. |
17:34:24 | Jehan_ | flaviu: Hmm, haven't encountered that yet, but hadn't really tried it, either. |
17:37:37 | Jehan_ | As to calling iterators recursively: any time you try to yield from more than one stack frame below the entry point (barring inlining, which you can't do recursively), you need coroutines or continuations, which are problematic in C. |
17:38:03 | Jehan_ | … which you can't do with recursion ... |
17:40:24 | flaviu | my problem basically boils down to `iterator iter(n: int):Foo = for i in iter(n-1): yield someProc(i)` |
17:40:24 | flaviu | I don't see why closure iterators would have a problem |
17:42:07 | Jehan_ | Hmm, that's odd. You normally can use iterators in iterator bodies, you just can't yield from them. |
17:42:37 | Jehan_ | Hmm, wait. |
17:44:15 | * | xtagon joined #nimrod |
17:44:47 | Jehan_ | Hmm, the manual explicitly prohibits recursive iterators of either kind, but I always read that differently. |
18:35:56 | superfunc | exit |
18:35:58 | * | superfunc quit (Quit: leaving) |
18:40:49 | araq | bastian__ you can use type(foo()) |
18:41:01 | araq | no need to name the type |
18:41:48 | araq | flaviu: well recursive iterators don't work, maybe closure iterators can be made to work ... |
18:45:45 | * | onr joined #nimrod |
18:45:48 | * | snearch joined #nimrod |
18:46:12 | flaviu | Probably, but I realized that laziness wasn't absolutely necessary in my situation and I can just use seqs |
18:46:34 | araq | pff, eliminating the recursion by hand is usually not hard |
18:46:51 | araq | well it's always a good practice :P |
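Araq's point about eliminating the recursion by hand can be sketched quickly. A minimal rendering in Python rather than Nimrod, so the sketch stays runnable; `some_proc` and the base case are invented, since flaviu's pseudocode elides them. The recursive generator only maps `some_proc` over the previous level, so the recursion unrolls into a plain loop:

```python
def some_proc(i):
    return i * 2  # hypothetical per-element transformation

def recursive_iter(n):
    # flaviu's pseudocode, with an assumed base case yielding a single 1
    if n == 0:
        yield 1
        return
    for i in recursive_iter(n - 1):
        yield some_proc(i)

def iterative_iter(n):
    # Hand-eliminated version: n nested levels just apply some_proc n times
    # to each element of the base case, so a loop replaces the recursion.
    value = 1
    for _ in range(n):
        value = some_proc(value)
    yield value

assert list(recursive_iter(5)) == list(iterative_iter(5)) == [32]
```

The iterative form sidesteps Nimrod's restriction entirely, since it never recurses.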
18:47:01 | araq | hi snearch, onr welcome |
18:47:16 | onr | hello |
18:48:42 | onr | Nimrod compiles to ANSI C, which can compile on all platforms? |
18:49:24 | araq | it's more of "realworld C" |
18:49:50 | araq | we use C compiler specific extensions whenever we feel like it |
18:50:13 | araq | but yes, it's very portable |
18:53:01 | * | araq is now known as Anwender |
18:53:32 | onr | using C/C++ libraries must be pretty easy in Nimrod then |
18:55:25 | flaviu | Anwender: Is the official nimrod brace style 1TBS or is it K&R? |
18:55:35 | Jehan_ | C yes, C++ is not entirely trivial (different object model and such). |
18:56:06 | Anwender | flaviu: why do we need a brace style? |
18:56:23 | flaviu | https://gist.github.com/flaviut/841f3ea5a8a303433cfb |
18:56:35 | Anwender | hmm ... why am I Anwender |
18:56:39 | * | Anwender is now known as Araq |
18:57:16 | Jehan_ | Dunno. You just changed your nickname for some reason. Tab completion gone wrong? |
18:57:41 | Araq | nah, I switched computers |
18:58:37 | * | kunev joined #nimrod |
18:58:40 | flaviu | I was only half joking, and I'll always use 1TBS no matter what. |
19:02:53 | Araq | flaviu: does that gist work? |
19:03:09 | flaviu | Works for me. Reload? |
19:04:11 | EXetoC | I always use 1TBS I think |
19:04:37 | flaviu | method name(params) {\nbody\n}? |
19:05:35 | EXetoC | yep |
19:05:43 | Araq | flaviu: well I'm often surprised about what my compiler can deal with :P |
19:06:05 | EXetoC | I also like "x\n+ y\n+ z". not many people do that |
19:06:55 | Araq | flaviu: use whatever takes less screen space, I'd even use |
19:07:10 | Araq | if (foo) { |
19:07:23 | Araq | printf(); } |
19:07:40 | Araq | if that would be acceptable by others ;-) |
19:08:49 | flaviu | Wikipedia calls it lisp style |
19:09:29 | EXetoC | the whitesmiths style is superior though |
19:09:39 | EXetoC | honestly |
19:10:31 | EXetoC | ok the GNU style is the one I was thinking of. it's slightly worse |
19:14:14 | Araq | lol whitesmiths style |
19:14:28 | Araq | that's an abomination |
19:14:50 | flaviu | I agree |
19:14:51 | Jehan_ | Araq: Let's hope the reason isn't that the compiler is secretly hatching Skynet. :) |
19:17:51 | Araq | teh |
19:18:09 | Araq | the compiler is simple ... except that lambda-lifting pass |
19:18:16 | Jehan_ | And the above is why I don't like braces and prefer the Modula-2 (minus the all caps stuff)/Eiffel approach. |
19:18:40 | Jehan_ | Not sure why everybody thinks emulating C syntax is the way to go. |
19:20:49 | Jehan_ | Though I can live with Scala's compromise. |
19:21:07 | Jehan_ | At least they put types where they belong, i.e. after the variable name and a colon. :) |
19:21:12 | flaviu | Jehan_: Because no one wants to type `begin` and `end`, when `{` and `}` are much shorter. |
19:21:38 | Jehan_ | flaviu: "begin" is not an Eiffel keyword. |
19:21:48 | Jehan_ | You're thinking of Pascal. |
19:21:59 | Jehan_ | Which has the same problems as C, plus is more verbose. |
19:22:09 | flaviu | Modula-2 actually |
19:22:44 | flaviu | Ah, Eiffel is `do`, `end`. I'm still not a big fan. |
19:23:15 | Jehan_ | The point is the comb style of indentation. |
19:23:48 | Jehan_ | Pythonesque syntax also gets you that, of course. |
19:24:22 | Araq | norton antivirus update is 200MB ... my hard disk used to have less space than that back in the day ... |
19:24:53 | Araq | and what do people do without a fast internet connection? |
19:25:00 | flaviu | Antivirus? Pfft. |
19:25:02 | Jehan_ | Araq: I believe my first hard drive (Atari ST) was something like 10 MB. |
19:25:58 | Araq | well my first PC was a 486 with 40MB HD iirc |
19:26:06 | Jehan_ | I remember writing long programs for Z80 where the source code was too big to fit in memory and had to be included (spread across several files) from tape. |
19:26:21 | Jehan_ | Cassette tape, of course. |
19:27:49 | Jehan_ | It did teach you to be modular (so that you could compile things in parts) and to avoid having errors in your code. |
19:28:29 | Jehan_ | I'm still amazed that we got anything done at all under these circumstances. |
19:28:41 | * | snearch quit (Quit: Leaving) |
19:28:54 | Araq | well that was before the internet :P |
19:29:03 | Araq | everybody was much more productive back then |
19:29:21 | Jehan_ | Eh, it's not like there weren't computer games to play. :) |
19:29:41 | Jehan_ | A real procrastinator doesn't need the internet. |
19:29:56 | Araq | yeah good point |
19:30:01 | Araq | uh oh |
19:30:21 | * | Araq remembers Master of Magic |
19:30:28 | * | Araq must ... resist ... |
19:30:40 | Jehan_ | But it gave me a lifelong preference for productivity features over low-level stuff in programming languages. |
19:31:05 | Jehan_ | Araq: Never heard of that. |
19:31:32 | Jehan_ | I still have a functioning version of "Lords of Midnight", though. |
19:31:44 | Jehan_ | Still one of the best strategy games. |
19:39:09 | Araq | never heard of that :P |
19:43:48 | flaviu | I may have underestimated the amount I have to process by an order of magnitude... |
19:43:55 | flaviu | I guess I do need laziness |
19:45:17 | * | Jehan_ quit (Read error: Connection reset by peer) |
19:45:18 | * | JehanII joined #nimrod |
19:47:36 | * | Matthias247 joined #nimrod |
19:47:42 | Araq | flaviu: I have no idea what you're doing, but let me recommend sqlite to you |
19:48:29 | * | Raynes quit (Ping timeout: 264 seconds) |
19:48:30 | Araq | usually when I have lots of data to process I get it wrong and some database enables an incremental approach |
19:49:40 | flaviu | I'm generating all the combinations of a set. Does sqlite allow me to run a PCRE and store the result in a column? |
19:50:26 | Araq | well it allows storing the result in a table ;-) |
19:50:49 | Araq | all combinations of a set sounds like a bad idea though |
19:51:33 | Matthias247 | Master of Magic? That was brilliant ;) |
19:51:36 | flaviu | Awesome, `Set test1=true Where x Regexp 'myregex' ` should work |
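For reference, SQLite parses `x REGEXP 'pattern'` but ships no REGEXP implementation by default; the application has to supply a `regexp()` function (SQLite rewrites `X REGEXP Y` as `regexp(Y, X)`, so the pattern comes first). A sketch in Python of the UPDATE flaviu describes; table and column names (`combos`, `x`, `matched`) are invented for the demo:

```python
import re
import sqlite3

conn = sqlite3.connect(":memory:")
# Register the missing REGEXP implementation; SQLite passes (pattern, value).
conn.create_function(
    "REGEXP", 2,
    lambda pattern, value: 1 if re.search(pattern, value or "") else 0,
)

conn.execute("CREATE TABLE combos (x TEXT, matched INTEGER DEFAULT 0)")
conn.executemany("INSERT INTO combos (x) VALUES (?)",
                 [("abc",), ("abd",), ("xyz",)])

# The UPDATE flaviu sketched, spelled out in full:
conn.execute("UPDATE combos SET matched = 1 WHERE x REGEXP 'ab.'")

print(conn.execute(
    "SELECT x FROM combos WHERE matched = 1 ORDER BY x").fetchall())
# → [('abc',), ('abd',)]
```

Note this uses Python's `re` syntax, not PCRE proper, but the mechanism is the same for any regex engine you wire in.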
19:59:18 | Matthias247 | i thought sql is out and nosql is the new shit :) |
20:01:40 | Varriount | Matthias247: Yeah, but then people realized that they like things like atomicity |
20:03:13 | Matthias247 | don't they all have transactions? And I guess single instructions are always atomic from the user-point-of-view |
20:04:43 | * | Raynes joined #nimrod |
20:04:55 | Matthias247 | but I was fine with SQL when I last used it (a loooooong time ago) |
20:04:58 | * | Raynes quit (Changing host) |
20:04:58 | * | Raynes joined #nimrod |
20:05:40 | * | superfunc joined #nimrod |
20:06:07 | Varriount | Matthias247: http://www.hakkalabs.co/articles/nosql-sql-build-schema-free-scalable-data-storage-inside-traditional-rdbms |
20:06:33 | superfunc | sup everybody |
20:07:29 | Varriount | Hi! |
20:13:57 | superfunc | Man I wish I could use nim for my work project. Trying to explain boost and C++11 to C# guys is a real struggle |
20:14:23 | superfunc | The second we got to move semantics, their collective mindholes blew open |
20:17:08 | Matthias247 | hehe. Absolutely believe it |
20:17:38 | Matthias247 | generally you won't do yourself a favor when you force C# or java developers to do C or C++ |
20:17:47 | Skrylar | superfunc: so your coworkers are talentless hacks who think clicking buttons in MSVC is how programs are made... how is nimrod going to help that :( |
20:18:30 | EXetoC | cc1plus memory overload |
20:18:32 | Varriount | For the sake of filling in the hole of my ignorance, what are move semantics? |
20:18:47 | Skrylar | Varriount: stuff like "if you move this value from A to B, it no longer exists in A" |
20:18:50 | * | kunev quit (Ping timeout: 252 seconds) |
20:18:57 | Matthias247 | Varriount: The Type&& stuff |
20:19:16 | superfunc | Lol, they are good developers, and good guys, just not experienced with lower-level languages |
20:19:28 | Skrylar | hand them a copy of vim lol |
20:19:36 | Varriount | Matthias247: Oh, double pointers? |
20:19:41 | Skrylar | if they learn vim, they are worthy of computer science XD |
20:19:42 | Matthias247 | func(Type&& typeThatWillbeMovedInsteadOfCopied) {} func(std::move(SomeStructure())); |
20:19:43 | Skrylar | anyway |
20:20:02 | Matthias247 | Varriount: no, double references, aka rvalue references |
20:20:39 | flaviu | Not nimrod, but can I join a single column table on itself in SQL? |
20:20:56 | superfunc | The main purpose is to avoid wasteful copies |
20:21:00 | Araq | flaviu: yes |
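The self-join flaviu asks about just needs two aliases of the same table. A small Python/SQLite sketch with invented names (`items`, `x`); the `t1.x < t2.x` condition cuts the cross product down to unordered pairs, i.e. combinations:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (x TEXT)")
conn.executemany("INSERT INTO items VALUES (?)", [("a",), ("b",), ("c",)])

# Alias the single-column table twice to join it on itself.
pairs = conn.execute(
    "SELECT t1.x, t2.x FROM items t1 JOIN items t2 "
    "ON t1.x < t2.x ORDER BY t1.x, t2.x"
).fetchall()
print(pairs)  # → [('a', 'b'), ('a', 'c'), ('b', 'c')]
```

Dropping the `<` condition (or using `ON 1`) gives the full 3×3 cross product instead.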
20:21:05 | Varriount | Matthias247: You're talking with someone who only has a nodding acquaintance with C++ |
20:21:13 | Matthias247 | I think c++11 has enough features by now to blow everyone's head |
20:21:24 | superfunc | It does |
20:21:26 | Skrylar | Matthias247: only because it's poorly engineered |
20:21:30 | Araq | I like C++11 in theory |
20:21:55 | Skrylar | "congratulations C++ community, it took you 20+ years and a huge design committee to crappily emulate LISP circa 1970" |
20:22:21 | Matthias247 | Skrylar: I won't say that. But it carries a lot of historic ballast that makes it gigantic |
20:22:23 | superfunc | Though, I'd much rather use 11(14 really) over 03 spec |
20:22:35 | superfunc | A lot of what is "wrong" with C++ is for a good reason |
20:22:57 | superfunc | It just makes life unnecessarily hellish |
20:22:59 | Skrylar | Matthias247: i will say it, and do. the compile times are absurd, and i've noticed C++ists have a weird Stockholm syndrome regarding their slow compile times |
20:23:19 | superfunc | It's my coffee break, I love the compile times |
20:23:25 | Matthias247 | lol |
20:23:28 | Matthias247 | me not, I hate them |
20:23:32 | superfunc | Adding boost gives me time for a second cup! |
20:23:37 | Matthias247 | hehe |
20:23:39 | Skrylar | Matthias247: i actually had hardcore C++ists telling me the reason that java et al. compiles quickly is because it's not optimized, and C++ compiles so slowly because it's fast. except they don't want to see freepascal perform the same as a C app yet compile in half the time |
20:24:03 | Araq | Skrylar: well they also built much useful software on their way to "Lisp" |
20:24:05 | superfunc | No, C++'s compilation model is fundamentally flawed |
20:24:12 | Skrylar | its the syntax |
20:24:13 | Matthias247 | yeah, my own library which includes boost asio took 2min to compile with my last PC for only 10kloc |
20:24:14 | superfunc | and templates bork it even harder |
20:24:20 | EXetoC | tinkering with something must be a pain in the ass when the code base is large |
20:24:21 | Skrylar | textual processing + heinous syntax = slowboat |
20:24:42 | Matthias247 | on the new one with SSD it's much better, but still a PITA compared to the 1s compile time for the c# version |
20:24:46 | Skrylar | Araq: i donno. before i started using nimrod i was working on a glorified lisp compiler |
20:25:00 | Skrylar | Araq: only reason i'm not using lisp right now is because SBCL didn't have a native compiler |
20:26:10 | superfunc | Yeah, enough about that horror. We have nim |
20:26:21 | Skrylar | nim++11 |
20:26:25 | superfunc | lol |
20:26:47 | flaviu | Is nim the new name? |
20:27:02 | Matthias247 | maybe in 2020 we will have faster compiling c++ through modules and concepts |
20:27:04 | Skrylar | it's the alternate name |
20:27:22 | Araq | flaviu: might become the new official name, even |
20:27:23 | EXetoC | what's this about lisp being particularly suitable for AI? as if AI code is so distinct |
20:27:30 | Skrylar | "we decided x.y notation was too frumpy so we now require kirbies instead" -- Nim++11 Committee, 2017 |
20:27:35 | EXetoC | I know that it's an old claim though |
20:27:41 | Skrylar | EXetoC: lisp was designed in the AI era |
20:27:42 | superfunc | I was just being lazy haha |
20:27:55 | Skrylar | then people realized intelligence is f**king difficult |
20:28:02 | flaviu | I'm not opposed, that name is pretty good |
20:28:16 | EXetoC | still |
20:28:21 | superfunc | sidebar, I always thought nimble sounded amazing |
20:28:40 | Skrylar | EXetoC: though lisp is a very simple language syntactically, and it's probably suitable for self-rewriting because lisp macros are almost exactly that. |
20:29:15 | Skrylar | if you follow neuroplasticity, that's basically one element an AI would need; self-rewriting |
20:36:58 | Matthias247 | Sometimes I also wonder whether the whole complexity of additional value semantics in c++, there to avoid allocations, is really worth it, or whether reference-only semantics (like in GCed languages), which need some more dynamic allocations but fewer copies, wouldn't come out about the same |
20:38:46 | superfunc | If it were any language but C++, I would say it wasn't worth it |
20:39:02 | flaviu | Araq: Thanks for recommending sqlite, I don't need nimrod anymore to solve this problem.:P |
20:39:35 | JehanII | Matthias247: Not sure. Do you mean abandoning value semantics entirely or just the three ring circus that C++ has to support them? |
20:39:42 | Araq | flaviu: er ... you're welcome |
20:40:15 | * | JehanII is now known as Jehan_ |
20:40:21 | Araq | Matthias247: that's the part of C++ that I really like. :-) |
20:40:44 | Matthias247 | JehanII: I think probably about the same compromise that c#/nimrod/swift are doing. primitives by value, complex types only by ref |
20:40:46 | Jehan_ | Araq: I'm not surprised, given some of Nimrod's design decisions. :) |
20:40:59 | superfunc | Also, they were kinda necessary to introduce unique_ptr |
20:41:06 | superfunc | which is probably the best thing brought into 11 |
20:41:32 | Jehan_ | Matthias247: How would you make matrix multiplications over complex numbers fast without value types? |
20:41:36 | Matthias247 | ok, you can also have by-value objects in nimrod and structs in C# |
20:42:32 | Matthias247 | Jehan_: ok, such things are really good examples for useful value types |
20:43:10 | Jehan_ | The problem with value types in C++ is the whole language complexity they introduce, i.e. interaction with constructors, destructors, temporary values, etc. |
20:43:46 | Jehan_ | I don't see any inherent problems with value types as such otherwise. |
20:43:52 | flaviu | Seems like the best solution would've been to make them second-class citizens then? |
20:43:52 | Matthias247 | yes |
20:44:34 | Matthias247 | there is also some overlap between types that you put in a unique_ptr and types that you make moveable-only |
20:44:47 | Jehan_ | And arguably the problem is that C++ is a very complicated language where features tend to interact in non-trivial ways. |
20:44:54 | * | freezerburnv quit (Quit: freezerburnv) |
20:45:35 | Jehan_ | Matthias247: Well, that's a problem with reference types. |
20:45:42 | Matthias247 | the question would be: Can you decide at the design-time of a class whether it should be only used by ref or only used by value |
20:46:50 | Jehan_ | The typical use case for value types is small types that are immutable. |
20:47:28 | Matthias247 | yes |
20:47:43 | Jehan_ | Also, non-basic temporary values are another common application. |
20:48:51 | Matthias247 | in C++ you can create some types on the stack which will never work there - e.g. because they register some callback which is invoked later on |
20:49:08 | Matthias247 | There are even some types which only make sense when you put them in some kind of shared_ptr |
20:50:15 | Jehan_ | Matthias247: The problem is that value types in C++, much like many other complexities of C++, are the result of avoiding automatic memory management. |
20:54:05 | Jehan_ | E.g. how the rule of three has now arguably become the rule of five. |
20:54:42 | Matthias247 | yes. And that's so much bloat |
20:54:54 | Matthias247 | even when you have = default/delete |
20:55:05 | Araq | Jehan_: quite possible but I wonder if the whole system works decently when you add escape analysis into the mix |
20:55:22 | Araq | so destructors are only invoked when the value doesn't escape |
20:55:44 | Araq | and then you don't have the weirdness "the compiler can optimize destructor calls away" |
20:56:07 | Jehan_ | Araq: Not sure how that would fix the language complexity issue? |
20:56:16 | Jehan_ | Or are you talking about something else? |
20:58:45 | Matthias247 | hmm, couldn't apple probably optimize swift to do normal reference-counting instead of atomic-rc when the compiler can prove that an object lives only in a single thread? That in combination with the ability to optimize unnecessary retain/release calls away could make the automatic memory management close to zero overhead for many use-cases. |
21:00:01 | Jehan_ | I didn't think Objective-C did atomic reference counting at all? |
21:00:29 | Jehan_ | Admittedly, I haven't looked at the language in quite some time. |
21:01:17 | Matthias247 | it did. Up to 3 years ago or so you had to manually put retain/release calls for all objects in |
21:01:40 | Jehan_ | Yeah, but they were just non-atomic increments/decrements under the hood? |
21:01:44 | Matthias247 | and since that they introduced automatic-reference-couting which made LLVM put all required calls in for you |
21:02:15 | Jehan_ | Yeah, I know that. |
21:02:29 | Matthias247 | they are atomic. Otherwise the whole blocks thing wouldn't work very reliably :) |
21:03:22 | Matthias247 | you can access an objective-c object from any thread. Just like a shared_ptr |
21:03:27 | Jehan_ | Ugh. |
21:03:57 | Jehan_ | Well, reasoning about lifetimes in a multi-threaded environment is pretty difficult. |
21:04:20 | Jehan_ | However, there are several techniques to deal with RC in a multi-threaded system efficiently. |
21:04:37 | Jehan_ | Like, well, deferred reference counting. :) |
21:05:02 | Jehan_ | Rather than incrementing/decrementing immediately, you write inc/dec commands to a buffer. |
21:05:06 | Jehan_ | And execute them when it's safe. |
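A toy sketch of the buffering scheme Jehan_ describes, purely illustrative and in Python (a real collector manipulates raw refcount words in object headers, not a dict): inc/dec operations go into a buffer and are applied in one batch at a safe point.

```python
class DeferredRC:
    def __init__(self, flush_threshold=1024):
        self.counts = {}              # obj_id -> refcount
        self.buffer = []              # pending (obj_id, delta) pairs
        self.flush_threshold = flush_threshold

    def incref(self, obj_id):
        self.buffer.append((obj_id, +1))
        if len(self.buffer) >= self.flush_threshold:
            self.flush()

    def decref(self, obj_id):
        self.buffer.append((obj_id, -1))
        if len(self.buffer) >= self.flush_threshold:
            self.flush()

    def flush(self):
        # "Execute them when it's safe": apply all buffered updates at once,
        # then collect anything that dropped to zero.
        for obj_id, delta in self.buffer:
            self.counts[obj_id] = self.counts.get(obj_id, 0) + delta
        self.buffer.clear()
        dead = [o for o, c in self.counts.items() if c == 0]
        for o in dead:
            del self.counts[o]        # a real runtime would destroy the object here
        return dead

rc = DeferredRC()
rc.incref("a"); rc.incref("a"); rc.incref("b")
rc.decref("a"); rc.decref("b")
print(rc.flush())  # → ['b']
```

This also makes Matthias247's later objection concrete: "b" stays nominally alive until the flush, so destruction is deferred, not immediate.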
21:05:59 | Araq | been there, done that |
21:06:09 | Araq | it's so slow that I consider it a joke |
21:06:43 | Varriount | Too bad you can't tell the OS to do such things for you, in between context switches. |
21:06:48 | Jehan_ | Araq: It shouldn't be? |
21:07:12 | Araq | Jehan_: but it is, first versions of the Gc did this |
21:07:37 | Araq | maybe I did it wrong. but I don't think so |
21:08:07 | Jehan_ | Araq: You had different increment/decrement buffers or alternatively filled it with increments from the bottom and decrements from the top? |
21:08:35 | flaviu | I wonder how sqlite handles 1977326743 elements :D |
21:08:57 | Jehan_ | You get some overhead, but it's not prohibitively slow. |
21:09:40 | Araq | doesn't matter how you do it, Jehan_ |
21:09:48 | Matthias247 | wouldn't that cause problems for weak pointers? |
21:09:55 | Araq | I had different buffers, I think |
21:09:58 | Matthias247 | because objects seem alive longer than they really are? |
21:10:24 | Matthias247 | and of course for immediate destructors |
21:10:27 | Jehan_ | Not really sure where the overhead should come from. Writing sequentially to a buffer isn't that expensive, and other than that, you're doing pretty much the same work. |
21:10:54 | Jehan_ | Matthias247: Yes, you don't get immediate destruction. |
21:11:13 | Araq | Jehan_: well (a) you then have the collector run more often as the buffer gets full quickly |
21:11:29 | Jehan_ | Matthias247: If you absolutely need that, you do need normal reference counting and have to optimize as many incs/decs away in the compiler as you can. |
21:11:37 | Araq | and (b) the compiler itself really performs lots of RC updates when building its data structures |
21:11:55 | Araq | I always used the compiler itself for benchmarking |
21:11:59 | Jehan_ | Araq: You don't have to run the GC everytime the buffer is full? |
21:12:19 | Araq | Jehan_: that's true but you still have to do some periodic work |
21:13:00 | Jehan_ | Technically, the only thing you have to do is to flush the buffer when it's full. |
21:13:41 | Jehan_ | I'm totally buying that extensive pointer manipulations on the heap can be a problem. |
21:14:03 | Jehan_ | And, of course, if you don't run multiple threads on the same heap, then there's no need for buffering in the first place. |
21:14:08 | Matthias247 | Jehan_: I think I could live without destructors. Thanks to garbage collected languages I'm now accustomed to calling Dispose & Co |
21:15:00 | Jehan_ | Matthias247: Destructors are generally evidence that a language doesn't support higher order functions adequately. :) |
21:15:43 | Varriount | Jehan_: What about cleanup behavior for external resources, like temp files? |
21:15:47 | flaviu | I love D's scope statement |
21:16:04 | Varriount | flaviu: Is that similar to nimrod's block statement? |
21:16:06 | EXetoC | Jehan_: yeah that's why we got them |
21:16:08 | Jehan_ | Varriount: In what specific context? |
21:16:16 | Matthias247 | if the refcount is attached to the object itself the deferring might cause caching problems. You will load every object into cache again in the phase where you actually modify the refcount |
21:16:27 | flaviu | Varriount: No, it's like nim's finally construct, but more flexible |
21:16:31 | Varriount | Jehan_: Temporary files that I would like deleted off the disk when I'm done with them. |
21:16:34 | Matthias247 | Varriount: I think it's similar to Nimrods second meaning of the finally construct |
21:16:53 | Jehan_ | If you just want block scope, do "withTempFile(file, "pattern"): … or something. |
21:17:31 | Varriount | Jehan_: Yeah, but that means that the file handle can't escape the scope. |
21:17:52 | EXetoC | scope(fail) { ... } to execute block after a raised exception and scope(exit) { ... } to execute something regardless IIRC |
21:17:59 | Jehan_ | Varriount: If it can, then lifetime becomes a non-trivial problem anyway. |
21:18:00 | EXetoC | scope(success) too |
21:18:16 | flaviu | EXetoC: You remembered correctly |
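D's `scope(exit)`/`scope(success)`/`scope(fail)` map fairly directly onto Python's `try`/`except`/`else`/`finally`. A small sketch of the semantics EXetoC describes (the function, exception, and log messages are invented for the demo):

```python
def demo(should_fail):
    log = []
    try:
        log.append("body")
        if should_fail:
            raise ValueError("boom")
    except ValueError:
        log.append("scope(fail)")     # runs only after a raised exception
        raise
    else:
        log.append("scope(success)")  # runs only if no exception occurred
    finally:
        log.append("scope(exit)")     # runs regardless of how we leave
    return log

print(demo(False))  # → ['body', 'scope(success)', 'scope(exit)']
```

D's version is arguably nicer because the cleanup sits next to the resource acquisition instead of at the bottom of the block, and multiple `scope` statements stack without extra nesting.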
21:18:44 | * | onr quit (Quit: onr) |
21:18:46 | EXetoC | pretty neat indeed. we could've been implementing so many things had it just been possible to go up the AST :p |
21:18:59 | Matthias247 | Jehan_: that would be equivalent to C#'s using and Javas new try-with-resources. But I don't like the additional nesting level ;) |
21:19:16 | * | Demos joined #nimrod |
21:19:32 | Araq | Jehan_: also things like lookatitagainlater[i++] = obj look really bad for caches |
21:19:33 | EXetoC | dangerous indeed, but people sure would have fun with that |
21:19:46 | Jehan_ | Matthias247: But you do like writing artificial types just so the destructor gets invoked when the scope is exited? :) |
21:20:24 | Matthias247 | Jehan_: no ;) |
21:20:26 | Jehan_ | Araq: Umm, why? Sequential writing to memory is something that caches expect. |
21:20:43 | Araq | because it's cached but shouldn't |
21:21:08 | Jehan_ | Araq: Not sure I'm following you? |
21:21:34 | Matthias247 | if you store all refcounts together at a central location and not attached to the objects it could be faster |
21:22:29 | Jehan_ | Matthias247: For shared objects, you can also optimize the speed at the cost of extra memory by using basically the SNZI approach. |
21:22:43 | Jehan_ | Especially since leaf operations do not have to be atomic in this case. |
21:22:51 | Jehan_ | If you go about it smartly. |
21:23:12 | Araq | Jehan_: nobody knows SNZI |
21:23:25 | Varriount | What is SNZI? |
21:23:35 | Jehan_ | Scalable Non-Zero Indicator. |
21:23:42 | Araq | Varriount: hardcore stuff |
21:23:49 | Matthias247 | just heard the first time about it |
21:24:00 | Jehan_ | Basically a shared RC that avoids writing to a single memory location too much. |
21:24:23 | Araq | it's really hardcore... I didnt get the paper |
21:24:24 | Varriount | 'shared' meaning, shared by threads? |
21:24:30 | Jehan_ | Not a full RC, only supports increment, decrement, and isZero operations. |
21:24:35 | Jehan_ | Araq: It's actually pretty simple. |
21:24:48 | Jehan_ | You have one RC, and N sub-RCs. |
21:25:24 | Jehan_ | When you increment or decrement, you pick one of the sub-RCs. If it goes from 0 to 1, you increment the main RC, too; if it goes from 1 to 0, you decrement the main RC. |
21:25:45 | Jehan_ | The tricky part is doing that atomically, which is where much of the complexity comes from. |
21:25:50 | Jehan_ | But the basic concept is very simple. |
21:26:07 | Varriount | Jehan_: Couldn't N be the number of threads? |
21:26:12 | Jehan_ | Though obscured, because they keep talking about a tree of RCs, when the tree really only has a root and one layer below that. |
21:26:22 | Jehan_ | Varriount: That's basically what I'm saying. |
21:26:37 | Jehan_ | You could have one per thread, do the inc/dec on that non-atomically. |
21:26:50 | Matthias247 | one per core would probably be better ;)
21:26:51 | Jehan_ | And increment/decrement the shared RC only when you reach/leave zero. |
21:27:01 | Matthias247 | threads come and go |
21:27:20 | Jehan_ | Matthias247: Yes, but then you need atomic increments for the sub-RCs. |
21:27:26 | Matthias247 | but if you have a 32core machine then say goodbye to your memory :-) |
21:27:31 | Araq | ah lol that's what i'm working on ... kind of |
21:27:31 | Jehan_ | With one per thread, you don't. |
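Jehan_'s description of the SNZI idea (one main counter, N sub-counters, and the main counter touched only when a sub-counter crosses zero) can be sketched in Python. This is a simplified, lock-based illustration only; the actual SNZI paper gives a lock-free algorithm that handles the atomicity Jehan_ mentions, and all names here are invented for the example:

```python
import threading

class SNZI:
    """Illustrative sketch of a Scalable Non-Zero Indicator: one shared
    counter plus N per-slot sub-counters. The shared counter is only
    touched when a sub-counter crosses zero, so most increments and
    decrements stay local to one slot (e.g. one slot per thread)."""

    def __init__(self, slots=4):
        self.subs = [0] * slots
        self.locks = [threading.Lock() for _ in range(slots)]
        self.main = 0
        self.main_lock = threading.Lock()

    def arrive(self, slot):
        with self.locks[slot]:
            self.subs[slot] += 1
            crossed = self.subs[slot] == 1   # 0 -> 1: announce to main RC
        if crossed:
            with self.main_lock:
                self.main += 1

    def depart(self, slot):
        with self.locks[slot]:
            self.subs[slot] -= 1
            crossed = self.subs[slot] == 0   # 1 -> 0: retract from main RC
        if crossed:
            with self.main_lock:
                self.main -= 1

    def is_zero(self):
        with self.main_lock:
            return self.main == 0
```

As in the discussion, this only supports increment, decrement, and isZero; it is not a full reference count, and with one slot per thread the sub-counter updates would not need locks at all.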
21:27:45 | Varriount | Araq: I thought you were working on spawn? |
21:28:31 | Araq | actually I'm working on fixing lambda lifting for nested procs |
21:28:39 | Matthias247 | Jehan_: no, you don't when the threads use the RC of the core they are currently running on
21:28:58 | Varriount | Jehan_: I'm guessing that adding locks around refcounters is why a truly multithreaded cpython implementation is so slow? |
21:29:36 | flaviu | Varriount: From what I understand, only one thread can execute at any one time in cpython |
21:29:45 | flaviu | They call it the Global Interpreter Lock |
21:29:56 | Jehan_ | Matthias247: Context switches can occur during an RC. |
21:30:00 | Varriount | flaviu: Yes, I'm very well aware of it. |
21:30:04 | Jehan_ | … during an RC operation. |
21:30:11 | Jehan_ | Yes, you do need them to be atomic even then. |
21:30:25 | Matthias247 | ah, that's true |
21:30:30 | Jehan_ | Varriount: Yes. The RC in Python is the main problem for the GIL. |
21:30:47 | Araq | Varriount: 'spawn' requires some form of automatic shared memory support |
21:30:53 | Jehan_ | s/problem/reason/ |
21:31:18 | Matthias247 | apparently swift also has some performance problems due to the RC: http://stackoverflow.com/questions/24101718/swift-performance-sorting-arrays |
21:31:23 | Varriount | flaviu: A test implementation of fine grained locking, which did away with the GIL, was attempted for CPython some time ago. |
21:31:31 | Jehan_ | Matthias247: Yeah, I saw that, but looks like a bug to me. |
21:31:48 | Matthias247 | Jehan_: I also think so. Apple will fix it |
21:31:53 | Varriount | flaviu: The problem was that it slowed down python quite a bit, due to all the locking mechanisms involved. |
21:31:54 | Jehan_ | Like they're doing unnecessary heap operations. |
21:32:22 | flaviu | http://i.stack.imgur.com/wcdXk.jpg pfft :P |
21:32:32 | Varriount | (And believe me, no one on the python-dev mailing list will accept python becoming any slower) |
21:33:17 | Jehan_ | Varriount: Heh. :) |
21:33:43 | Araq | Varriount: is that why they enforced unicode strings everywhere? :P |
21:34:20 | Varriount | Araq: Actually, the strings aren't unicode until a unicode character is introduced. |
21:34:42 | Araq | Varriount: yes, but that's a recent optimization |
21:35:01 | Araq | python 3 was introduced and was much slower than python 2.6
21:35:34 | Varriount | Araq: That was a special case. |
21:35:46 | Jehan_ | I'm still not sure what the point of some of the compatibility breaks was. |
21:36:12 | Jehan_ | One of my main use cases for Python has always been to have a somewhat higher level alternative to /bin/sh |
21:36:23 | Varriount | "Special" meaning that the web application community (which forms a surprisingly large part of the community) was threatening to beat the developers over the head with the developer's own arms. |
21:36:27 | Jehan_ | Since it's pretty much on every system where you have /bin/sh |
21:36:48 | Jehan_ | But now that I have two versions to deal with, that benefit pretty much goes out of the window. |
21:37:20 | Araq | koch.nim used to be koch.py |
21:37:27 | Varriount | Jehan_: Which is why it's going to take another 10 years for the community to migrate. |
21:37:38 | Araq | I ported it to python 3, didn't work anymore |
21:37:39 | Varriount | *migrate to python 3 |
21:37:54 | Varriount | Araq: Then you didn't port it correctly. :3 |
21:38:09 | Jehan_ | These days, I basically have an extra wrapper that makes sure I'm running Python 2. |
21:38:14 | Araq | then I ported it to nimrod and was happy |
21:39:07 | EXetoC | I gotta admit, you do seem to know your way around nimrod, so I'm not surprised :p |
21:39:08 | Varriount | Jehan_: I've renamed my python binaries to 'python27' and 'python33' |
21:39:36 | Jehan_ | Varriount: Well, the standard is that there should be python2 and python3 binaries if you have both. |
21:40:12 | Jehan_ | This way, configuration scripts can check. |
21:40:47 | Araq | I still want my script compiler |
21:40:52 | Jehan_ | #!/bin/sh |
21:40:52 | Jehan_ | if which python2 >/dev/null; then |
21:40:52 | Jehan_ | exec python2 "$@" |
21:40:52 | Jehan_ | else |
21:40:52 | Jehan_ | exec python "$@" |
21:40:53 | Jehan_ | fi |
21:41:02 | Araq | compile nimrod to .bat or .sh ... |
21:41:19 | Jehan_ | Araq: This sounds like it would be painful. |
21:42:30 | Araq | the pain is justified though, it would be quite useful |
21:42:40 | Jehan_ | Oh, of course. |
21:42:54 | Jehan_ | But, a /bin/sh backend? |
21:42:58 | Varriount | Jehan_: On Windows, whether sh will work depends on whether Linus Torvalds got up on the right side of the bed that day.
21:43:21 | Jehan_ | Varriount: Windows already needs special treatment, so I don't care. |
21:43:56 | Varriount | Jehan_: And just like that, you exclude more than half of a prospective userbase. |
21:43:57 | Jehan_ | If /bin/sh doesn't work, then the library configure scripts I'm running won't work, either. |
21:44:27 | * | io2 quit (Quit: ...take irc away, what are you? genius, billionaire, playboy, philanthropist) |
21:44:27 | Jehan_ | Varriount: No, I'm talking about special-casing the Windows situation. |
21:45:29 | Jehan_ | Also, not sure what Linus Torvalds has to do with Cygwin etc. :) |
21:45:51 | Araq | Jehan_: a sh backend is a very good use case to improve parts of nimrod's architecture |
21:46:51 | Jehan_ | Araq: What would "x mod 15" compile to? |
21:47:08 | Araq | that's exactly what I mean |
21:47:48 | Jehan_ | Unary representation + sed? :) |
21:47:53 | Araq | it wouldn't compile, plain and simple (in version 1 of the sh backend) |
21:48:05 | Jehan_ | I think I'd target awk first ... |
21:48:15 | Araq | the backend selects a custom system.nim |
21:48:27 | Araq | and special cases things like os.copyFile |
21:48:49 | Araq | everything that is not supported produces a clean error message |
21:50:11 | Jehan_ | I understand that. The question is how you'd express essential primitives in /bin/sh. |
21:50:45 | Varriount | Jehan_: Run an external calc binary? |
21:50:48 | * | noam joined #nimrod |
21:51:01 | Araq | Jehan_: that's the wrong question :-) |
21:51:02 | Jehan_ | Varriount: That'd kinda defeat the purpose of targeting /bin/sh? |
21:51:25 | Varriount | Isn't that the famed Unix/Linux way? Making large programs composed of smaller programs? |
21:51:49 | Varriount | Jehan_: Well, someone wrote an assembler in pure bash, so it's probably possibly. |
21:51:53 | Jehan_ | Varriount: Yes. And you could call expr or awk. |
21:51:53 | Varriount | *possible |
21:52:10 | Jehan_ | But … then there's really little reason not to use, say, awk directly. |
21:52:21 | * | springbok_ joined #nimrod |
21:52:30 | Araq | the point is to support what bin/sh is good at but use nimrod's nicer syntax and static typing |
21:52:42 | Jehan_ | Araq: Hmm. |
21:53:04 | Jehan_ | I think I see. |
21:53:38 | Jehan_ | But in that case, I'd just use Python, to be honest. |
21:53:54 | Jehan_ | The benefit of writing sh scripts is that they run anyway. |
21:53:59 | Jehan_ | anywhere* |
21:54:04 | Araq | well our installers use shell scripts, not python, for a reason |
21:54:10 | Jehan_ | Anywhere where there's POSIX, at least. |
21:54:15 | Araq | exactly |
21:56:42 | * | Trustable quit (Quit: Leaving) |
21:58:29 | * | Jesin joined #nimrod |
22:07:57 | Varriount | Hm. Does anyone do C++ compilation as a service? |
22:09:08 | Araq | /dev/null is a service |
22:09:19 | flaviu | http://compileonline.com/ https://ideone.com/ http://codepad.org/ https://compilr.com/ |
22:09:30 | flaviu | Varriount: I can get more if you want |
22:09:43 | flaviu | https://www.google.com/search?q=online+compiler is the best place to find them |
22:11:32 | Varriount | Araq: I was just thinking, you could offer Nimrod Compilation as a Service, since Nimrod is so hard to comp- I mean, because it takes so lon- I mean... because you can? |
22:12:05 | * | raleigh quit (Quit: Leaving) |
22:12:44 | Araq | Varriount: now you know why the gnu free sofware guys can sell support and we can't :P |
22:13:11 | Araq | selling support works much better when your product is crap |
22:14:42 | Araq | win8's task manager is superb |
22:15:00 | Araq | it even tells you what's in "autostart" |
22:15:03 | flaviu | Windows 8 is all-around awesome. |
22:15:08 | Varriount | Araq: Yes, it's one of the few things Win8 got right. |
22:15:25 | * | Varriount throws flaviu into the Metro interface |
22:15:30 | Araq | flaviu: not sure if you're serious |
22:16:05 | flaviu | I am, I don't even mind the metro UI |
22:16:26 | Varriount | My greatest annoyance is the fact that Microsoft decided to restrict the shadow backup service (background file versioning service) to only work when you pair it with external storage. |
22:16:27 | flaviu | But all I use is the desktop and search |
22:17:13 | flaviu | Luckily, I don't have anything important on windows, so I don't care if I lose data.
22:18:01 | Varriount | flaviu: It was really handy when you accidentally wrote over a file, and then wanted a previous copy of it. |
22:19:13 | Matthias247 | the task manager, the explorer and the flat decoration are the best improvements of win8 ;) |
22:19:44 | Matthias247 | ah, and in the meantime I also enjoy using win+q for starting apps |
22:22:14 | superfunc | I enjoyed my brief time with win8 via my brothers surface |
22:23:44 | * | skyfex joined #nimrod |
22:24:09 | Araq | yay skyfex is back! |
22:24:25 | dom96 | I agree with flaviu. Windows 8 is awesome. |
22:25:07 | flaviu | Wow, some people actually like windows 8. Wonder why they never come up on reddit. |
22:26:03 | dom96 | People are more likely to voice their complaints. |
22:27:14 | skyfex | :) |
22:27:40 | * | Matthias247 quit (Read error: Connection reset by peer) |
22:27:45 | Araq | the reddit people are too busy fixing space leaks in their haskell programs :P |
22:28:05 | dom96 | what |
22:28:23 | dom96 | Not once did I hear that being a problem with Haskell. |
22:28:32 | flaviu | dom96: Thunk buildup |
22:28:51 | superfunc | people on reddit are too busy writing circle-jerk articles |
22:29:43 | Jehan_ | There are plenty of things to like about Haskell. But … it can't leap tall buildings in a single bound, either. :) |
22:29:47 | Varriount | flaviu: Can you explain (about thunk buildup) |
22:30:00 | springbok_ | The only place that's a worse echo chamber than Reddit is Hacker News. |
22:30:14 | Jehan_ | "I have discovered functional programming and static typing. Now I know that Fred Brooks was wrong." |
22:30:47 | springbok_ | jehan_: lol |
22:30:50 | dom96 | Araq: is my article embargo lifted? |
22:30:52 | Jehan_ | But, honestly, I think any serious computer scientist should learn Haskell. |
22:31:26 | Jehan_ | Not necessarily for use in production, but to understand at least the concepts. |
22:31:28 | flaviu | Varriount: Not too sure I remember correctly, but I recall in some situations closures get allocated on the heap and only after all have been allocated do they get executed |
22:31:33 | * | skyfex quit (Ping timeout: 240 seconds) |
22:32:58 | EXetoC | dom96: so one's own privmsg's aren't received? |
22:33:17 | * | skyfex joined #nimrod |
22:33:18 | Varriount | dom96: Article embargo? |
22:33:49 | dom96 | Varriount: he wanted me to wait before I release my new blog article about async |
22:34:11 | Araq | dom96: well give me a chance to fix nested closures |
22:34:13 | springbok_ | Sigh...I suppose I'm not a serious computer scientist. All we have was ML, Miranda & Hope. |
22:34:15 | Varriount | Jehan_: I hear so many good things about haskell, but rarely do I hear any bad things (other than 'it's hard to understand') |
22:34:21 | dom96 | Araq: ok |
22:34:26 | EXetoC | I think they should be, but it doesn't matter too much |
22:34:34 | springbok_ | s/have/had/ |
22:34:54 | * | exetest joined #nimrod |
22:35:00 | Jehan_ | Varriount: Biggest problems are: |
22:35:18 | Jehan_ | (1) It is really, really difficult to predict runtime or memory usage of a Haskell program. |
22:35:56 | Jehan_ | (2) Lack of destructive updates means that you're likely going to use performance in some pretty common scenarios. |
22:36:16 | Araq | *to lose ? |
22:36:19 | Jehan_ | lose performance* |
22:37:16 | Jehan_ | You can work around (2), but essentially in this case you're constructing a cumbersome imperative subsystem. |
22:38:17 | dom96 | Interesting. Much like Varriount I have not heard many bad things about Haskell, certainly not this. |
22:38:21 | Jehan_ | Extensive use of state/IO monads has all the downsides of using an imperative programming language, minus the actual language features of a well-designed imperative language to mitigate them. |
22:38:43 | * | skyfex quit (Ping timeout: 240 seconds) |
22:38:53 | exetest | alright |
22:39:09 | Jehan_ | One of the more interesting attempts to deal with that problem is Disciple. |
22:39:17 | Varriount | I curse the optimistic fool that decided writing the YAML reference implementation in Haskell was a good idea. |
22:39:21 | dom96 | When using Haskell I found that I was trying to work imperatively too much. Monads scared me and I spent a lot of time trying to avoid them :P |
22:39:23 | Jehan_ | Which is basically Haskell + an effects system to allow for destructive updates. |
22:39:50 | flaviu | Jehan_: Like nimrod, but with `func` working?
22:39:51 | Jehan_ | Speaking of which, the author of Disciple wrote a great dissertation largely about that problem. |
22:39:59 | dom96 | Varriount: Be thankful he didn't decide to use brainfuck. |
22:40:03 | Araq | I think nimrod's effect system is very natural and haskell's monads are a weird way to solve the problem |
22:40:13 | Varriount | Not that I think Haskell is bad (I don't have enough knowledge), however the language is so unlike other languages that it makes it quite hard to write other complete reference implementations. |
22:40:33 | Jehan_ | dom96: Monads are actually a pretty simple concept that keeps being explained poorly. |
22:41:14 | Araq | monads are simple, yes |
22:41:50 | Jehan_ | At a very high level, it's about wrapping types with additional information and then applying functions to the wrapped type (this explanation will make type theorists cringe, but it'll probably explain the idea better than the "correct" write-ups you can find). |
22:41:51 | flaviu | dom96: You have a box, that is your monad. It can have stuff inside, but you don't care and can't find out. You can tell the box to do stuff, which it'll do, but only if the box isn't empty. |
22:42:25 | Jehan_ | Simple example: The Option/Maybe monad. |
22:43:18 | Jehan_ | This is just a wrapper around an existing type, Some(value) or None. |
22:43:43 | dom96 | Yeah. I understand Option/Maybe. Didn't realise they were all so similar? |
22:43:44 | Jehan_ | Applying a function f means turning Some(value) into Some(f(value)) and None into None. |
22:43:50 | Jehan_ | Not really similar. |
22:44:16 | Jehan_ | It differs in (1) How you augment the values through wrapper types and (2) the rules how functions are applied. |
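The Option/Maybe monad Jehan_ walks through can be sketched in a few lines of Python. This is an illustrative sketch (not Haskell's actual Maybe, and the names are invented for the example): the wrapper is either Some(value) or Nothing, and applying a function turns Some(value) into Some(f(value)) while Nothing stays Nothing:

```python
class Maybe:
    """Sketch of the Maybe monad: a wrapper holding either a value
    (Some) or nothing at all (Nothing)."""

    def __init__(self, value=None, present=False):
        self.value = value
        self.present = present

    @classmethod
    def some(cls, value):
        return cls(value, True)

    @classmethod
    def nothing(cls):
        return cls()

    def map(self, f):
        # Some(v).map(f) == Some(f(v)); Nothing.map(f) == Nothing
        return Maybe.some(f(self.value)) if self.present else self

    def bind(self, f):
        # f itself returns a Maybe; this is the monadic 'bind' that lets
        # possibly-failing steps be chained without explicit None checks.
        return f(self.value) if self.present else self
```

Other monads differ, as Jehan_ says, in what the wrapper carries and in the rules for how functions are applied, not in this basic shape.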
22:46:11 | Jehan_ | The idea behind the State and IO monad is that you can enforce that functions are evaluated in a specific order by composing them. |
22:46:50 | Jehan_ | Here's the thing, though. |
22:47:00 | Jehan_ | Thinking in monads doesn't necessarily help you. |
22:47:37 | * | Varriount|Mobile joined #nimrod |
22:47:50 | Araq | IO Option Int <-> Option IO Int # IMHO this is what's wrong with it |
22:47:57 | Jehan_ | As soon as you have algebraic data types or something similar and use higher order functions, you get them sort of as a side effect. |
22:48:27 | Araq | IO is an *effect* and shouldn't be encoded via the return type |
22:48:39 | Araq | else you get the above problem |
22:49:25 | Jehan_ | Another general problem with pure functional languages is something that Peter Van Roy (of Oz fame) pointed out a while ago. |
22:49:53 | Jehan_ | You can lose modularity (information hiding) because you have to pass ALL data as a parameter. |
22:50:22 | Jehan_ | A common use case is caching. |
22:50:39 | Araq | Jehan_: yes but pretty much everything is anti modular |
22:50:46 | Araq | static typing is |
22:50:48 | Jehan_ | Imperative and impure functional languages can hide a cache. Pure functional problems have an issue there. |
22:50:58 | Araq | but no, dynamic typing is ... |
22:51:24 | Araq | modularity is not well defined afaik |
22:51:40 | Jehan_ | Araq: I mean anti-modular in the very specific sense that you have to expose information that you'd like to abstract away. |
22:52:10 | flaviu | Jehan_: Why? Isn't monitization a big deal in functional programming? |
22:52:10 | Jehan_ | Implicit parameters in Haskell can mitigate that to some extent, but they're no panacea. |
22:52:24 | Araq | well but then what you like to abstract away can break your thread safety |
22:52:31 | Jehan_ | Monitization? |
22:52:56 | Araq | so I'm not sure it's a good example |
22:53:20 | flaviu | Jehan_: I can't spell, and I can't get spellcheck to figure out my horrible spelling there. |
22:53:25 | Jehan_ | Araq: Doesn't break thread-safety if you use monitors. |
22:53:37 | flaviu | Jehan_: memoize |
22:54:32 | Jehan_ | flaviu: Doesn't change the underlying problem. |
22:54:41 | Araq | Jehan_: true but then the type system should hide thread-safe caches but not non-thread-safe caches ... would be a very useful feature indeed |
22:55:04 | Jehan_ | Either you limit the lifetime of the cache to the scope of the function or you expose it. |
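Jehan_'s caching point can be made concrete: in an imperative or impure language the cache can hide entirely behind the function boundary, so callers never see the mutable state. A minimal Python sketch (names invented for the example):

```python
def memoized_fib():
    # The cache lives in the closure: callers get a plain int -> int
    # function, and the mutable dict stays hidden behind the interface.
    # A pure functional language would instead have to thread the cache
    # through every call as an explicit parameter.
    cache = {}

    def fib(n):
        if n in cache:
            return cache[n]
        result = n if n < 2 else fib(n - 1) + fib(n - 2)
        cache[n] = result
        return result

    return fib
```

Note this hidden cache is exactly what can silently break thread safety, which is Araq's objection: the type system ideally would let you hide a thread-safe cache but not a non-thread-safe one.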
22:55:26 | * | nande joined #nimrod |
22:55:29 | Jehan_ | Araq: Actually, I designed something like that a while ago. |
22:55:42 | Jehan_ | I'm still trying to figure out if it's worthwhile actually implementing it. |
22:55:53 | Araq | well ... I'm not sure I like it. caches have other problems too |
22:56:05 | Jehan_ | But you could do just that. |
22:56:45 | Araq | Jehan_: we do something similar with the upcoming "gcsafe" effect |
22:57:12 | Jehan_ | Araq: I'm talking about something different. |
22:57:24 | Araq | well ok, not really, but yes, we can do that easily |
22:57:34 | Jehan_ | A variant on http://dl.acm.org/citation.cfm?id=1297042 |
22:58:15 | Jehan_ | Adapted for regular critical regions instead of STM. |
23:00:48 | * | saml_ joined #nimrod |
23:09:53 | * | Jehan_ quit (Quit: Leaving) |
23:10:37 | * | saml_ is now known as saml |
23:13:53 | * | nande quit (Remote host closed the connection) |
23:17:05 | * | exetest quit (Remote host closed the connection) |
23:18:38 | OrionPK | latecomer: have to agree, windows 8 is a big step up from windows 7 |
23:18:51 | OrionPK | the metro UI is great for tablets, but I pretty much ignore it on my desktop |
23:19:02 | flaviu | More screen space for search too |
23:19:20 | Araq | I think the ui is primitive and archaic but at least it doesn't get in my way |
23:20:01 | * | nande joined #nimrod |
23:20:58 | dom96 | It's pretty much a 2-in-1 OS. The metro UI you can use with touch screens and the desktop you can use with a mouse. |
23:21:22 | Varriount | How does Haskell do dynamic allocations? Or is it so sciency that it doesn't do dynamic memory? |
23:21:25 | dom96 | I think Microsoft optimised it for computers with both a mouse and a touch screen. |
23:21:49 | Varriount | dom96: Nope. I have a laptop with a touchscreen, and metro isn't that helpful. |
23:21:53 | dom96 | I can see people coming from older versions of Windows getting really confused though. |
23:22:05 | dom96 | Especially with Windows 8 which doesn't have a start button. |
23:22:16 | Varriount | dom96: Look at 8.1 |
23:22:16 | flaviu | Varriount: I'd assume normal GC, like everything else |
23:22:24 | Araq | Varriount: pretty much everything goes on the heap, like most (all?) functional languages do
23:22:29 | dom96 | Varriount: Yes, I know about 8.1. |
23:22:35 | Varriount | flaviu: Then.. what kind of GC does Haskell use? |
23:22:37 | dom96 | And that is what I am using. |
23:22:48 | Varriount | Does it handle cycles? How responsive is it? |
23:23:15 | Araq | Varriount: stop the world concurrent copying collector I think |
23:23:20 | flaviu | Varriount: Copying Generational |
23:23:24 | * | darkf joined #nimrod |
23:24:35 | Varriount | How do you have a stop the world collector that is also concurrent? |
23:24:55 | flaviu | It isn't stop-the-world, it's concurrent
23:24:56 | Varriount | Isn't that a contradiction of terms? |
23:25:51 | flaviu | It appears that there is a branch for a concurrent GC |
23:25:55 | flaviu | http://stackoverflow.com/questions/15236238/current-state-of-haskell-soft-real-time |
23:27:06 | Araq | Varriount: most "concurrent" GC's still have minor stop the world phases |
23:27:53 | dom96 | You can certainly stop the execution of the threads, and instead run the GC concurrently I think. |
23:28:16 | Araq | dom96: that's a "parallel" collector then
23:28:31 | Araq | concurrent means "concurrent wrt the mutator" |
23:28:38 | dom96 | the mutator? |
23:30:42 | * | Varriount|Mobile quit (Quit: AndroIRC - Android IRC Client ( http://www.androirc.com )) |
23:33:31 | Araq | mutator = threads that run your program |
23:33:43 | Araq | as opposed to running the gc |
23:35:13 | dom96 | ahh |
23:37:50 | dom96 | It seems that inputStream for processes doesn't work on Windows. |
23:40:18 | * | exetest joined #nimrod |
23:48:47 | * | zshazz_ joined #nimrod |
23:49:20 | Araq | hmm I tested this once |
23:49:25 | dom96 | good night |
23:49:37 | dom96 | see my latest bug report |
23:51:57 | EXetoC | dom96: bye |
23:52:12 | Araq | same here, good night |
23:52:26 | EXetoC | dom96: am I not supposed to get MPrivMsg where I am the origin? |
23:52:34 | NimBot | nimrod-code/packages master f188909 Grzegorz Adam Hankiewicz [+0 ±1 -0]: Adds midnight_dynamite module. |
23:52:34 | NimBot | nimrod-code/packages master 8350900 Dominik Picheta [+0 ±1 -0]: Merge pull request #63 from gradha/pr_midnight_dynamite... 2 more lines |
23:52:47 | * | zshazz quit (Ping timeout: 252 seconds) |
23:52:59 | dom96 | EXetoC: Depends on the message. |
23:53:38 | EXetoC | dom96: if I send something to a channel |
23:53:46 | dom96 | EXetoC: Then the channel is the origin. |
23:53:57 | dom96 | origin is what you should use when sending a message back |
23:54:01 | dom96 | it's a convenience. |
23:54:06 | EXetoC | ok not the origin then |
23:54:21 | dom96 | it is technically :P |
23:54:28 | dom96 | the channel is the origin |
23:54:45 | EXetoC | didn't mean that. nevermind |
23:55:13 | flaviu | I assume there isn't a built-in way to intern strings? |
23:55:16 | EXetoC | s/origin/nick |
23:55:32 | dom96 | it's only the nick if you are sending a message directly to the client |
23:55:42 | dom96 | i.e. if the destination is its nick |
23:56:05 | dom96 | The use case is: irc.send(msg.origin, "pong") |
23:56:41 | dom96 | using msg.params[x] (where x is the position where the channel name is) will not work |
23:57:05 | dom96 | because when you receive a PM to you, then that param will be your own nick |
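The origin convention dom96 describes can be sketched as a hypothetical helper (this is an illustration of the rule only, not the actual irc module's code): for a channel message the reply target is the channel, but for a PM the target param is your own nick, so the reply should go to the sender instead.

```python
def reply_target(sender_nick, target_param, own_nick):
    # Channel message: target_param is e.g. "#nimrod", reply there.
    # Private message: target_param is our own nick, reply to the sender.
    return sender_nick if target_param == own_nick else target_param
```

This is why using the raw params position directly ("msg.params[x]") breaks for PMs, while an origin field computed like this works for both cases, as in irc.send(msg.origin, "pong").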
23:57:32 | * | exetest quit (Remote host closed the connection) |
23:57:36 | EXetoC | right |
23:57:47 | dom96 | This is like my 4th IRC module so I like to think that it's well thought out ;) |
23:58:11 | Varriount | Where is the IRC protocol even documented? |
23:58:31 | EXetoC | rfc 1459 and some other rfc |
23:58:44 | dom96 | https://tools.ietf.org/html/rfc2812 |
23:59:25 | dom96 | It's a nice protocol. |
23:59:30 | Roin | it is easy |
23:59:38 | * | Roin phases out again
23:59:52 | dom96 | hehe, hi Roin |
23:59:57 | Roin | Heya dom96 o/ |