01:03:39 | * | BitPuffin joined #nimrod |
01:11:51 | * | DAddYE quit (Remote host closed the connection) |
01:38:04 | * | q66 quit (Quit: Leaving) |
02:25:37 | * | BitPuffin quit (Ping timeout: 256 seconds) |
03:24:50 | reactormonk | lib/system.nim(1796, 23) Error: cannot open 'panicoverride' |
03:24:52 | reactormonk | hmm |
03:25:36 | reactormonk | Araq, forgot to check that in? |
03:52:15 | * | Associat0r joined #nimrod |
03:52:15 | * | Associat0r quit (Changing host) |
03:52:15 | * | Associat0r joined #nimrod |
05:55:08 | * | BitPuffin joined #nimrod |
05:55:32 | * | OrionPK quit (Read error: Connection reset by peer) |
06:01:18 | * | BitPuffin quit (Ping timeout: 264 seconds) |
07:49:23 | * | exhu joined #nimrod |
07:50:05 | exhu | i regenerated csources and build/nimbase.h and compiler/mapping.txt have changed, is it normal? what are those files for? |
08:13:11 | Araq | reactormonk: nope, you need to provide it, look at the example in tests/manyloc/standalone |
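(For context, a minimal sketch of what such a panicoverride.nim can look like, loosely modeled on the tests/manyloc/standalone example Araq mentions; the C imports and the rawoutput/panic proc names are assumptions here, so check the example in the repository for the authoritative version:)

    # panicoverride.nim -- user-supplied panic handling for standalone builds
    proc printf(frmt: cstring) {.varargs, importc, header: "<stdio.h>", cdecl.}
    proc exit(code: cint) {.importc, header: "<stdlib.h>", cdecl.}

    {.push stackTrace: off.}

    proc rawoutput(s: string) =
      printf("%s\n", s)

    proc panic(s: string) =
      rawoutput(s)
      exit(1)

    {.pop.}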
08:13:35 | Araq | exhu: it's normal and mapping.txt is used by niminst to determine which C files exist |
08:15:41 | exhu | i've filed a pull request |
08:15:59 | exhu | without those files |
08:16:59 | exhu | don't know why previous compilers built ok, because now they don't link on either machine.
08:17:23 | exhu | but on mac it did link ok (clang) |
08:19:24 | Araq | what's "both machines"? windows? |
08:46:37 | Araq | bbl |
09:01:05 | * | q66 joined #nimrod |
09:20:34 | exhu | ubuntu 13.04 64-bit gcc |
11:11:13 | * | exhu quit (Quit: Ex-Chat) |
11:27:29 | * | Trix[a]r_za is now known as Trixar_za |
11:35:07 | * | EXetoC joined #nimrod |
12:21:01 | * | BitPuffin joined #nimrod |
12:22:34 | * | Trixar_za is now known as Trix[a]r_za |
12:56:45 | BitPuffin | Araq: shouldn't the competitive concurrency model thing you're working on be added to the roadmap on the website?
13:15:53 | * | gradha joined #nimrod |
13:49:37 | * | BitPuffin quit (Ping timeout: 276 seconds) |
13:53:19 | * | fowl quit (Ping timeout: 260 seconds) |
14:05:45 | * | fowl joined #nimrod |
14:14:01 | EXetoC | fowl: yo |
14:15:41 | dom96 | yo yo |
14:21:55 | EXetoC | lo |
14:37:09 | gradha | nihathrael: yesterday I thought I had replicated the bug you had, but it turns out the problem was a stale/broken nimcache; after deleting the directory I could not reproduce it any more
14:39:22 | * | apotheon quit (Ping timeout: 246 seconds) |
14:40:52 | * | apotheon joined #nimrod |
14:40:52 | * | apotheon quit (Changing host) |
14:40:52 | * | apotheon joined #nimrod |
14:41:06 | gradha | nihathrael: wait, I got it back, the problem is using suggest without a nimcache, could you try modifying your code to start the server and perform a "compile" command before querying for suggestions? |
14:44:38 | gradha | also I can run a caas test successfully once, but it fails on the next run if I don't remove nimcache first
14:57:19 | * | zahary_____ joined #nimrod |
14:57:19 | * | zahary_____ is now known as zahary_ |
15:21:56 | * | fowl quit (Quit: EliteBNC free bnc service - http://elitebnc.org) |
15:36:27 | reactormonk | Araq, kk |
15:38:18 | * | fowl joined #nimrod |
15:49:03 | * | BitPuffin joined #nimrod |
16:10:36 | EXetoC | BitPuffin: have you written any Nimrod code recently? |
16:11:04 | BitPuffin | EXetoC: I'm still plowing through the manual |
16:11:12 | BitPuffin | But I'm starting a project on monday |
16:11:48 | gradha | monday? sounds like serious stuff |
16:11:54 | EXetoC | ok |
16:13:21 | BitPuffin | gradha: why haha |
16:13:56 | EXetoC | maybe he thinks you will get monies for it |
16:13:56 | gradha | jobs tend to start on mondays, are you doing something for work in nimrod? |
16:15:36 | EXetoC | I doubt it :p but that would be awesome |
16:16:39 | BitPuffin | gradha: well it's not so much for work, but I'm gonna be more disciplined with working on my own projects and doing freelance stuff, so while I'm applying and waiting for responses etc, I'll work on my personal project that is serious |
16:17:29 | gradha | I used to think that way (being disciplined) until I bought xcom for ios two weeks ago and have hardly made anything IRL |
16:19:26 | BitPuffin | haha :D |
16:43:11 | EXetoC | I have no discipline whatsoever. I wonder if the lack of it is considered a medical condition |
16:46:22 | EXetoC | happy coding |
16:46:26 | * | EXetoC is now known as EXetoC_ |
16:47:10 | BitPuffin | heh well me neither |
16:47:18 | BitPuffin | but I'm attempting to acquire it
16:47:20 | BitPuffin | gonna get up earlier |
16:47:23 | BitPuffin | and just work |
17:04:24 | * | OrionPK joined #nimrod |
17:24:30 | * | BitPuffin quit (Ping timeout: 268 seconds) |
18:06:45 | * | apotheon quit (Ping timeout: 264 seconds) |
18:07:20 | * | apotheon joined #nimrod |
18:07:20 | * | apotheon quit (Changing host) |
18:07:20 | * | apotheon joined #nimrod |
18:13:50 | * | apotheon quit (Ping timeout: 268 seconds) |
18:19:50 | * | apotheon joined #nimrod |
18:19:50 | * | apotheon quit (Changing host) |
18:19:50 | * | apotheon joined #nimrod |
18:24:09 | * | EXetoC_ is now known as EXetoC |
19:03:40 | reactormonk | gradha, sweet stuff |
19:13:01 | * | alexandrus joined #nimrod |
19:14:21 | * | q66 quit (Remote host closed the connection) |
19:14:50 | * | q66 joined #nimrod |
19:19:16 | gradha | you are eating something good? |
19:19:41 | alexandrus | morning
19:20:11 | gradha | hi alexandrus |
19:20:15 | alexandrus | how are you? |
19:20:28 | gradha | debating on what to do next: play or play |
19:20:29 | reactormonk | gradha, the idetools |
19:20:51 | alexandrus | hmm, one could also develop something xd |
19:20:57 | gradha | reactormonk: it's all just text nobody reads anyway |
19:21:08 | gradha | reactormonk: more impressive would be actually fixing idetools |
19:22:03 | reactormonk | gradha, I read that kind of text ;-) |
19:22:13 | reactormonk | gradha, got a bunch of failing tests? |
19:22:53 | gradha | most of the tests fail for me, either in proc mode or caas |
20:07:20 | Araq | hmm tomorrow is Sunday ... maybe I should have bought some food ... |
20:07:55 | alexandrus | so tomorrow is a diet day |
20:11:02 | EXetoC | yes the best diet ever |
20:11:11 | Araq | meh .. there are always gas stations |
20:12:06 | * | gradha quit (Quit: bbl, need to watch https://www.youtube.com/watch?v=1ZZC82dgJr8 again) |
20:20:16 | EXetoC | no food stores open on sundays? |
20:20:39 | Araq | not in germany, no |
20:20:46 | EXetoC | oh man |
20:21:33 | EXetoC | alexandrus: eating is the main focus of proper diets you know :p |
20:21:47 | alexandrus | really? xD |
20:37:56 | EXetoC | yeah, just avoid the bad stuff |
20:38:57 | OrionPK | glad I live in a secular country |
20:39:08 | OrionPK | where they just shut down liquor stores on sunday >:( |
20:39:17 | OrionPK | (also car dealerships) |
20:45:29 | EXetoC | I didn't know that the term 'secular' implied that as well :p |
20:48:29 | * | exhu joined #nimrod |
20:49:08 | exhu | i feel like a seasoned tester here -) |
20:50:26 | exhu | either the compiler doesn't build, or there's an internal error, or the generated program segfaults
20:57:45 | * | exhu quit (Quit: Ex-Chat) |
21:06:37 | Araq | exhu, come back! perhaps I can help you |
21:14:25 | * | OrionPKM joined #nimrod |
21:45:44 | reactormonk | what's the borrow pragma? |
21:46:05 | Araq | it steals an implementation for a 'distinct' type |
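(A minimal sketch of that, with made-up names: {.borrow.} lifts the base type's implementation onto a distinct type:)

    type
      TMoney = distinct int                       # distinct: int's operations are not available by default

    proc `+`(a, b: TMoney): TMoney {.borrow.}     # borrow int's `+` implementation
    proc `<`(a, b: TMoney): bool {.borrow.}       # borrow int's `<`

    var balance = TMoney(100)
    balance = balance + TMoney(50)                # compiles thanks to the borrowed `+`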
21:55:30 | * | BitPuffin joined #nimrod |
22:10:18 | * | Sergio965 joined #nimrod |
22:12:03 | Araq | BitPuffin: the roadmap says "a shared memory garbage collected heap will be provided" |
22:12:50 | BitPuffin | Araq: and that's the competetive thingy? |
22:13:31 | Araq | no ... ;-) |
22:13:54 | BitPuffin | well then what about it lol! |
22:14:13 | Araq | well it's about concurrency, so there :P |
22:14:43 | Araq | also the homepage is more about what we've accomplished rather than about features we hope to get some day |
22:14:43 | BitPuffin | haha, you mean because of the "shared memory" part hehe
22:19:35 | * | OrionPKM quit (Remote host closed the connection) |
22:31:48 | BitPuffin | Araq: I'm trying to find some info about what competetive concurrency actually is, any recommendation?
22:32:02 | BitPuffin | competitive |
22:33:10 | alexandrus | one could imagine so many things x |
22:34:07 | Araq | well I made up that term |
22:34:12 | BitPuffin | oh |
22:34:19 | BitPuffin | so that's why there are no papers about it
22:34:30 | EXetoC | concurrency that doesn't suck I imagine |
22:34:42 | alexandrus | concurrency with race conditions xD |
22:34:57 | alexandrus | one process sabotages the other xD |
22:35:03 | EXetoC | rolf |
22:35:13 | BitPuffin | hey rolf
22:35:15 | BitPuffin | stooop
22:35:17 | BitPuffin | stop it rolf
22:35:21 | Araq | for me it means safety: no deadlocks, no race conditions, no livelocks
22:35:32 | alexandrus | live locks? |
22:35:51 | alexandrus | like dead ends in a petri net? |
22:36:30 | BitPuffin | Araq: so what's the competitive part then, can you even do that stuff without immutability stuff
22:38:07 | BitPuffin | stuff. |
22:40:02 | EXetoC | stuff! STUFF!! |
22:40:24 | EXetoC | I need more stuff. where can I get some money? |
22:42:58 | Araq | immutability means you can read from a data structure safely without any synchronization |
22:43:22 | Araq | so it's hard to argue it's pointless for concurrency |
22:43:25 | BitPuffin | yeah exactly |
22:43:41 | BitPuffin | but how does your model solve the problems |
22:43:47 | alexandrus | persistence |
22:44:23 | Araq | the problem with immutability is that it's time dependent
22:44:24 | BitPuffin | or rather, what exactly is the model
22:44:36 | Araq | and type systems suck at modelling time |
22:44:49 | alexandrus | could you elaborate that a bit? |
22:45:09 | alexandrus | the time dependence here |
22:46:02 | * | zahary__ joined #nimrod |
22:47:25 | Araq | well for a start you need to construct the data structure |
22:47:51 | alexandrus | that so far is understood |
22:48:04 | Araq | and if it's not a tree, it's hard to construct it and keep it immutable during construction |
22:48:07 | alexandrus | so there is a time interval, where the data is created, at some point it is complete |
22:48:37 | * | zahary_ quit (Ping timeout: 248 seconds) |
22:48:47 | alexandrus | i guess i would follow Chris Okasaki on that (Purely Functional Data Structures) |
22:48:59 | alexandrus | where each construction step creates a new persistent data structure |
22:49:05 | Sergio965 | Immutability is in no sense time dependent. |
22:49:11 | Sergio965 | If it is, it's not immutability. |
22:49:13 | alexandrus | with a lot of sharing |
22:52:10 | BitPuffin | yeah I'm not sure I see why it's time dependent |
22:52:19 | BitPuffin | it can't be used until it's constructed sure |
22:52:28 | BitPuffin | but what's that got to do with concurrency |
22:53:08 | BitPuffin | if that was a problem then it would be a problem on one thread too |
22:54:02 | alexandrus | let's say we have some data structure over an index I
22:55:02 | alexandrus | we could run some parallel algorithm on it, but we first have to distribute the necessary parts of the data structure to the nodes |
22:55:16 | alexandrus | this step could benefit from persistence |
22:55:34 | alexandrus | since the data structure won't break during that |
22:55:55 | Sergio965 | What does concurrency have to do with persistence? |
22:57:02 | BitPuffin | well not necessarily anything, but concurrency benefits from persistence |
22:57:12 | alexandrus | its just my 5 cents |
22:57:28 | alexandrus | that the process of distributing data to concurrent nodes
22:57:31 | alexandrus | may benefit here |
22:57:46 | alexandrus | maybe for real world scenarios one needs a kind of snapshot mechanism
22:58:26 | alexandrus | like a distributed transaction, if we add the reverse |
22:58:58 | Sergio965 | How does concurrency benefit from persistence? I don't understand how concurrency and persistence conflate in a positive way. |
22:59:03 | alexandrus | but what do you think about that, Araq? |
22:59:27 | alexandrus | maybe the wording is not right? |
22:59:45 | alexandrus | i took persistence from a book |
23:00:01 | alexandrus | i think its used differently in other contexts |
23:00:29 | alexandrus | just, create the data once, cannot modify it ever again |
23:00:41 | Sergio965 | Persistence = to persist = write to disk (usually). |
23:00:45 | alexandrus | immutable, ~ java final |
23:00:57 | Sergio965 | Java's final doesn't imply immutability all the time. |
23:01:24 | alexandrus | ok...won't argue that, i am a java newbie
23:01:26 | Sergio965 | Or even half the time. |
23:01:32 | Sergio965 | Just rarely. |
23:01:39 | alexandrus | my languages are Ada, C++ |
23:01:50 | Sergio965 | Immutability is like doing const in C++. |
23:02:05 | alexandrus | if there is no alias and no const-cast involved |
23:02:16 | Sergio965 | Well, no. |
23:02:38 | Araq | alexandrus: since you all know better than me, I'm quiet now |
23:02:48 | alexandrus | Araq?.... |
23:03:03 | alexandrus | i wasn't trying to shut you down at all...
23:03:12 | alexandrus | please, stay with us:-) |
23:03:23 | alexandrus | don't be thrown off by my 5 cents
23:03:37 | Sergio965 | Araq: I challenged your assertion that immutability is time-dependent. I don't presume to know more than you. Simply, it seems to me, by definition, immutability has nothing to do with time. If an object is said to be immutable, it cannot be changed. |
23:04:18 | Araq | break construction apart into multiple steps and nothing can be immutable |
23:04:49 | Sergio965 | What you're describing has all to do with atomicity and nothing to do with immutability. |
23:05:16 | Sergio965 | The "atoms" are said to be immutability, a structure made of multiple atoms cannot, unless made atomic, be inherently immutable. |
23:05:20 | Sergio965 | immutable* |
23:05:27 | alexandrus | Araq, are we speaking of functional construction or imperative construction? |
23:05:35 | BitPuffin | immutability is like doing let in nimrod :D
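(Concretely, a tiny sketch of that, nothing project-specific:)

    let xs = @[1, 2, 3]    # single assignment, enforced at compile time
    var ys = @[1, 2, 3]    # mutable
    ys[0] = 10             # fine
    # xs[0] = 10           # rejected: 'xs' cannot be assigned to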
23:05:45 | alexandrus | atoms are mutable, says the physicist xD |
23:06:14 | alexandrus | i have to take the time to learn nimrod first, currently preparing for a job interview...not so easy to do everything at the same time
23:06:23 | alexandrus | (and prepare a talk about multi threading next week too) |
23:06:47 | alexandrus | mercy:-) |
23:14:29 | Araq | Sergio965: that's way too philosophical
23:15:00 | Araq | you construct a data structure, then it doesn't change anymore and so you can read without synchronization |
23:15:30 | Sergio965 | Araq: As long as the construction occurs atomically, then the entire data structure can be said to be immutable. |
23:15:32 | alexandrus | but where is the time dependence "problem" |
23:16:11 | EXetoC | time? sequence? |
23:17:29 | Sergio965 | What is "time"? Where is "forward"? What is? |
23:18:07 | alexandrus | we don't want to go into physics here, do we? xD |
23:18:08 | Araq | Sergio965: the construction often can't occur atomically |
23:18:51 | Araq | for example, try to construct a graph with a cycle in a functional language |
23:19:02 | alexandrus | Araq, just wondering: if the construction is not atomic, can it be separated into intermediate construction steps, where each is created atomically, in your language?
23:19:51 | alexandrus | i guess that's only possible as I -> Node, IxI -> Bool
23:19:55 | Sergio965 | Araq: Self-reference doesn't imply that atomicity is impossible. (In Haskell, for example, with lazy loading, I can do x = 1 : x) |
23:20:12 | Sergio965 | Now, I have an infinite, immutable cycle. |
23:20:39 | Araq | I know but laziness is cheating :P |
23:20:48 | alexandrus | uhm, the term self reference is a bit... vague to me here
23:21:18 | alexandrus | pointer self reference vs recursion |
23:21:24 | Sergio965 | alexandrus: It just means that the structure somehow refers to itself. Like a linked list that starts with some element X and somewhere in the middle, goes back to that element. |
23:21:49 | Sergio965 | alexandrus: I would argue that recursion is inherently self-referential, but in a different sense. I was speaking purely about data structures. |
23:21:49 | alexandrus | ok, so pointer self reference, but that's not x=1:x, which is just recursive
23:22:09 | Sergio965 | What do you mean? Of course it is. |
23:22:34 | Sergio965 | If you iterate through x, you won't do it recursively, will you? |
23:22:36 | alexandrus | any pointers in here?...where? |
23:22:44 | Sergio965 | Implicitly. That was Haskell. |
23:22:51 | alexandrus | i know haskell |
23:22:59 | Sergio965 | Okay... |
23:23:04 | alexandrus | ok, thats implementation detail |
23:23:11 | alexandrus | internal...there are lots of pointers of course |
23:23:23 | Sergio965 | The point is if you iterate through x, you'll get an infinite number of 1s. |
23:23:26 | Sergio965 | Do you agree with that? |
23:23:41 | alexandrus | sure, it's an infinite list of 1:(1:(1:(1:...)))
23:24:35 | Sergio965 | Yeah, and it's infinite because it refers back to itself. |
23:24:37 | alexandrus | ok, let's drop the semantic stuff; for the abstraction haskell provides, that's not based on pointers
23:24:43 | alexandrus | even if internally, its implemented like that |
23:24:46 | Sergio965 | :\ |
23:24:56 | Sergio965 | You don't need the concept of pointers to be self-referential. |
23:25:27 | alexandrus | :-) |
23:26:11 | alexandrus | ok, so now lets get to graphs with cycles in them... |
23:26:30 | Sergio965 | Anyway, my point was that a self-referring data structure can be created atomically. And, if all of its elements are immutable, the data structure is immutable. Because of atomicity, there is no time-dependence.
23:26:45 | alexandrus | i think recursive definition like with trees is not possible here |
23:27:09 | Sergio965 | alexandrus: x = 1 : x is a graph with a cycle in it. Specifically, it's a graph with one node and one edge that goes from that node to itself. |
23:27:12 | Araq | Sergio965: you don't need haskell for that, you can substitute pointers with indexes to get the same
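(A sketch of the index idea in Nimrod, purely illustrative: a small cyclic graph built from index-based adjacency lists in a single step, and never mutated afterwards:)

    # nodes 0, 1, 2; edges[i] holds the indexes node i points to
    let edges = @[
      @[1, 2],   # 0 -> 1, 0 -> 2
      @[0],      # 1 -> 0 (cycle)
      @[0]       # 2 -> 0 (cycle)
    ]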
23:27:32 | Sergio965 | Araq: Agreed. It was just the easiest and most illustrative example I could think of. |
23:30:49 | alexandrus | ok, next question... |
23:30:54 | alexandrus | three nodes |
23:30:55 | alexandrus | 1,2,3 |
23:31:27 | alexandrus | now we have directed edges (1,2), (1,3), (2,1), (3,1)
23:31:34 | EXetoC | one went to the bathroom |
23:31:43 | Sergio965 | Lmao. |
23:33:37 | Sergio965 | alexandrus: Uh huh... |
23:33:53 | alexandrus | yes, i am trying a solution, but i don't get it quite right in your terms |
23:33:59 | alexandrus | two cycles |
23:34:11 | alexandrus | a=1:2:a, b=1:3:b |
23:34:14 | alexandrus | but how to combine them |
23:34:21 | Sergio965 | Araq: What exactly does the effect system let me do? I think I understand the gist of it from the manual (track exceptions, tag them, contain them, etc), but how would I use it in practice?
23:34:24 | alexandrus | its a valid graph, thats obvious |
23:35:27 | Sergio965 | alexandrus: What are you trying to do? |
23:35:31 | alexandrus | x=1:2:1:(2,3) there is some kind of choice in here...which defies the style you used |
23:35:41 | alexandrus | i don't get that graph in there |
23:35:53 | alexandrus | trying to get beyond the x=1:x |
23:36:28 | Araq | Sergio965: there is still the ~ missing in the language unfortunately; proc main() {.tags: [~FDb].} # compiler ensures main does no database access |
23:36:42 | Sergio965 | Araq: That's neat. |
23:36:56 | Sergio965 | Araq: Statically, I assume? |
23:37:12 | Araq | yep |
23:37:39 | Sergio965 | Do I have to supply the .raises: [ blah ] pragma to all the functions that can possibly raise FDb, or is it inferred?
23:37:55 | Araq | FDb is not an exception here, it's a tag
23:38:02 | Sergio965 | Oh I see. |
23:38:25 | Sergio965 | In that case, do I have to tag all the procs with FDb? |
23:38:33 | Araq | it's all inferred as far as possible |
23:38:59 | Araq | and the stdlib already uses the FDb tag |
23:40:24 | Sergio965 | I guess what I'm wondering is how you get through something like this: proc sometimes_db(yes: bool) = if yes: db_access() else: something_else(). Will a proc that calls sometimes_db(complicate_false()) with the tag ~FDb be allowed?
23:40:46 | Araq | no the tracking is flow insensitive |
23:40:54 | Sergio965 | Ah I see. |
23:41:08 | Sergio965 | That would have been the question to ask. Ha. |
23:42:10 | Araq | nowadays we got a flow analyzer but I still think it's a bad idea to make it flow sensitive |
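(A rough sketch of what flow-insensitive effect tracking means for the sometimes_db question above; the tag type and proc names are invented for illustration and assume TEffect as the base effect type, not the stdlib's actual FDb:)

    type
      FDatabase = object of TEffect        # invented tag type, for illustration only

    proc dbAccess() {.tags: [FDatabase].} = discard
    proc somethingElse() = discard

    proc sometimesDb(yes: bool) =
      if yes:
        dbAccess()                         # reachable only if 'yes' is true ...
      else:
        somethingElse()

    # ... but sometimesDb is still inferred to carry the FDatabase tag regardless
    # of the runtime value, so a caller whose tags list forbids it is rejected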
23:45:47 | BitPuffin | oh that reminds me, Araq, is there any laziness in nimrod?
23:46:20 | Araq | BitPuffin: we have 'iterator' and first class functions |
23:46:52 | Araq | also the macro system enables some form of laziness |
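(A tiny sketch of the 'iterator' part: values are produced one at a time, only when the consuming loop asks for them:)

    iterator countTo(n: int): int =
      var i = 1
      while i <= n:
        yield i        # each value is computed on demand, not up front
        inc i

    for x in countTo(3):
      echo x           # 1, 2, 3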
23:47:50 | alexandrus | <- admits that he has issues again...
23:48:06 | alexandrus | isn't laziness: evaluate only when needed?
23:49:04 | BitPuffin | Araq: hmm, how does macro system enable that? |
23:50:03 | Araq | BitPuffin: template iff(a, b, c: expr): expr = if a: b else: c |
23:50:29 | Araq | put it into a proc instead and see the laziness go out of the window |
23:51:43 | Araq | ok, "enable" is perhaps a bit strong, but at least it doesn't lose laziness
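(Spelled out, with a made-up 'expensive' proc:)

    proc expensive(): int =
      echo "expensive() was evaluated"
      result = 42

    template iff(a, b, c: expr): expr =
      if a: b else: c

    echo iff(true, 1, expensive())      # prints 1; expensive() is never called,
                                        # the untaken branch is never evaluated

    proc iffProc(a: bool; b, c: int): int =
      result = if a: b else: c

    echo iffProc(true, 1, expensive())  # here expensive() runs before the call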
23:51:52 | alexandrus | ok... |
23:51:58 | alexandrus | have a good night:-) |
23:52:02 | * | alexandrus quit () |
23:53:54 | BitPuffin | hmm |