00:11:54 | EXetoC | reactormonk: tracking dictates what exceptions may be raised. I assume you're referring to that. have you read that section of the manual? |
00:15:20 | * | darkf joined #nimrod |
00:22:29 | * | shodan45 quit (Ping timeout: 264 seconds) |
00:23:59 | * | shodan45 joined #nimrod |
00:24:02 | * | shodan45 quit (Client Quit) |
00:31:51 | * | Joe_knock quit (Quit: Leaving) |
01:17:29 | * | dom96_and quit (Quit: Bye) |
01:28:06 | * | hoverbear joined #nimrod |
01:28:10 | * | hoverbear quit (Max SendQ exceeded) |
01:28:42 | * | hoverbear joined #nimrod |
01:40:15 | * | ehaliewicz quit (Remote host closed the connection) |
01:40:44 | * | ehaliewicz joined #nimrod |
01:41:08 | * | q66 quit (Quit: Leaving) |
01:59:52 | * | Varriount joined #nimrod |
02:00:56 | * | Vclone quit (Ping timeout: 240 seconds) |
02:16:12 | * | saml_ joined #nimrod |
03:13:16 | * | def- joined #nimrod |
03:16:26 | * | def-_ quit (Ping timeout: 252 seconds) |
03:21:34 | * | dLog_ is now known as dLog |
04:36:57 | * | flaviu1 quit (Ping timeout: 240 seconds) |
04:51:20 | * | saml_ quit (Ping timeout: 240 seconds) |
05:07:59 | * | bjz joined #nimrod |
06:09:59 | * | shodan45 joined #nimrod |
06:26:55 | * | hoverbear quit () |
06:36:17 | * | shodan45 quit (Ping timeout: 264 seconds) |
06:58:32 | * | BitPuffin quit (Ping timeout: 245 seconds) |
07:55:37 | * | shevy quit (Ping timeout: 245 seconds) |
07:56:10 | * | shevy joined #nimrod |
08:34:25 | * | kunev joined #nimrod |
08:57:46 | * | zahary joined #nimrod |
08:58:36 | * | dymk quit (*.net *.split) |
08:58:38 | * | mal`` quit (*.net *.split) |
08:58:43 | * | Amrykid quit (*.net *.split) |
08:58:44 | * | untitaker quit (*.net *.split) |
08:58:46 | * | Roin quit (*.net *.split) |
08:58:48 | * | noam quit (*.net *.split) |
08:58:49 | * | Xuerian quit (*.net *.split) |
08:58:59 | * | dymk joined #nimrod |
08:58:59 | * | noam joined #nimrod |
08:58:59 | * | untitaker joined #nimrod |
08:58:59 | * | mal`` joined #nimrod |
08:58:59 | * | Xuerian joined #nimrod |
08:58:59 | * | Roin joined #nimrod |
08:58:59 | * | Amrykid joined #nimrod |
09:00:20 | * | freezerburnv joined #nimrod |
09:00:20 | * | freezerburnv quit (Client Quit) |
09:01:11 | * | Changaco quit (*.net *.split) |
09:01:14 | * | krusipo quit (*.net *.split) |
09:01:15 | * | fowl quit (*.net *.split) |
09:01:17 | * | reloc0 quit (*.net *.split) |
09:01:19 | * | betawaffle quit (*.net *.split) |
09:01:20 | * | Klaufir quit (*.net *.split) |
09:01:41 | * | Changaco joined #nimrod |
09:01:41 | * | Klaufir joined #nimrod |
09:01:41 | * | krusipo joined #nimrod |
09:01:41 | * | fowl joined #nimrod |
09:01:41 | * | reloc0 joined #nimrod |
09:01:41 | * | betawaffle joined #nimrod |
09:02:02 | * | noam quit (*.net *.split) |
09:02:03 | * | Xuerian quit (*.net *.split) |
09:03:02 | * | brihat joined #nimrod |
09:03:10 | * | Matthias247 joined #nimrod |
09:04:25 | * | untitaker quit (*.net *.split) |
09:04:27 | * | Roin quit (*.net *.split) |
09:04:35 | * | noam joined #nimrod |
09:04:35 | * | Xuerian joined #nimrod |
09:09:50 | * | untitaker joined #nimrod |
09:09:50 | * | Roin joined #nimrod |
10:11:06 | * | kemet joined #nimrod |
11:09:22 | * | ehaliewicz quit (Remote host closed the connection) |
11:16:01 | * | kemet quit (Quit: Instantbird 1.5 -- http://www.instantbird.com) |
11:18:57 | * | Matthias247 quit (Read error: Connection reset by peer) |
11:35:36 | * | q66 joined #nimrod |
11:35:36 | * | q66 quit (Changing host) |
11:35:36 | * | q66 joined #nimrod |
12:16:17 | * | untitaker quit (Ping timeout: 255 seconds) |
12:22:49 | * | untitaker joined #nimrod |
12:43:19 | * | darkf quit (Quit: Leaving) |
13:18:51 | * | flaviu1 joined #nimrod |
13:52:44 | * | kunev quit (Ping timeout: 276 seconds) |
14:10:18 | * | vendethiel quit (Ping timeout: 240 seconds) |
14:26:29 | flaviu1 | Araq: It appears you're right about option performance. Without any boxing fib(50) takes 48.71 seconds on my machine, and with boxing it takes over 3.30 minutes |
14:27:30 | flaviu1 | 243.32 seconds |
14:48:00 | * | ndrei joined #nimrod |
14:56:47 | * | vendethiel joined #nimrod |
15:22:16 | * | hoverbear joined #nimrod |
15:33:28 | * | vendethiel quit (Ping timeout: 240 seconds) |
15:50:48 | Araq | flaviu1: now write a blog post about Rust's way to do error handling is overly expensive and wrong for systems programming |
15:50:58 | Araq | *about how |
15:51:20 | flaviu1 | I should do benchmarks in rust then |
15:57:57 | * | BitPuffin joined #nimrod |
15:58:59 | EXetoC | BitPuffin: morning |
15:59:53 | BitPuffin | oi EXetoC |
15:59:56 | flaviu1 | I wonder if Option could be optimized down to exceptions |
16:01:28 | Araq | flaviu1: extremely difficult and has never been done afaik |
16:01:30 | BitPuffin | that would probably not be a good thing |
16:01:34 | EXetoC | the optimization being very low level I assume |
16:01:36 | flaviu1 | Wow, rust is slow. With O3, I get 121.42 seconds for fib(50), compared with 48.71 seconds in nimrod |
16:01:44 | BitPuffin | oh yeah |
16:01:50 | BitPuffin | fib is totally a great benchmark |
16:01:52 | EXetoC | using some base pointer or whatever it is you do |
16:01:52 | BitPuffin | :P |
16:02:10 | Araq | BitPuffin: it's actually a good benchmark for what flaviu1 is doing |
16:02:19 | flaviu1 | BitPuffin: My goal is to benchmark return times, and fib works great for that |
16:02:30 | EXetoC | the fibonacci sequence is an essential part of the internets |
16:02:42 | flaviu1 | It does practically no actual work, spending all its time in returning |
16:02:44 | BitPuffin | using exceptions for that would be stupid. You should use exceptions in the cases where you pretty much always expect something to work but it might not (i.e. reading a file etc) |
16:02:48 | * | Jesin quit (Quit: Leaving) |
16:02:51 | * | vendethiel joined #nimrod |
16:02:54 | BitPuffin | with Option you don't expect something to always be there |
16:03:18 | flaviu1 | BitPuffin: maybe I can benchmark the break-even point |
16:03:26 | BitPuffin | flaviu1: ah, well then by all means. Guess I've missed out on backlog |
16:03:44 | BitPuffin | I just thought "Rust is slow, because fib is slow" sounded really naive xD |
16:04:35 | BitPuffin | fib does some work |
16:04:40 | BitPuffin | it uses an if |
16:04:45 | BitPuffin | and subtraction |
16:04:51 | BitPuffin | and products |
16:06:41 | flaviu1 | So a couple instructions? |
16:06:57 | flaviu1 | With easy to predict branching? |
16:07:17 | bstrie | Araq: lots of us agree that unwinding is too slow to be used for systems programming :) gonna be a battle though... |
16:07:57 | BitPuffin | never said it was intensive work |
16:08:01 | BitPuffin | but it is work |
16:08:48 | EXetoC | gotta recover from those fatal errors in at most 3ns :> |
16:09:02 | bstrie | flaviu1: I presume this is a recursive implementation? |
16:09:16 | flaviu1 | I'm benchmarking return times, so yes |
16:09:40 | flaviu1 | if n <= 1: return n; else: return fib(n-1) + fib(n-2); |
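A minimal Nimrod sketch of the two variants being benchmarked here — a plain fib and one wrapped in a simple tuple-based option. flaviu1's actual benchmark code isn't shown in the log, so the names and the option representation below are illustrative:

    # plain version: almost all time goes into call/return overhead
    proc fib(n: int64): int64 =
      if n <= 1: return n
      return fib(n-1) + fib(n-2)

    # option-wrapped version: every return carries a (has, val) pair
    type Maybe[T] = tuple[has: bool, val: T]
    proc some[T](v: T): Maybe[T] = (has: true, val: v)

    proc fibOpt(n: int64): Maybe[int64] =
      if n <= 1: return some(n)
      return some(fibOpt(n-1).val + fibOpt(n-2).val)

    echo fib(30), " ", fibOpt(30).val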
16:10:43 | bstrie | flaviu1: cool. I'll be interested to see how guaranteed TCE changes that performance :) |
16:10:50 | bstrie | not gonna be anytime soon though |
16:11:06 | flaviu1 | guaranteed TCE? |
16:11:26 | flaviu1 | Tail call elimination, ok |
16:12:18 | bstrie | https://github.com/rust-lang/rfcs/pull/81 |
16:24:52 | flaviu1 | Wow, option in rust is very well optimized |
16:25:45 | flaviu1 | 129.67 seconds |
16:26:32 | flaviu1 | Compared to 121.42 seconds without option, and 243.32 seconds with option in nimrod |
16:27:13 | flaviu1 | Ah, they use compiler magic |
16:29:07 | * | Skrylar quit () |
16:31:05 | BitPuffin | lol |
16:31:35 | EXetoC | lololo |
16:31:38 | flaviu1 | I want to see how they do it, but it doesn't appear to be documented |
16:32:38 | bstrie | flaviu1: what's the exact type of the return value? is it Option<int> or Option<Box<int>>? |
16:32:57 | flaviu1 | In rust? Option<i64> |
16:33:02 | bstrie | hm |
16:33:24 | bstrie | I ask because, in rust, Option<Box<int>> gets optimized to a nullable pointer at runtime |
16:33:39 | dom96 | bstrie: Does Nimrod ever get mentioned in the #rust channel on mozilla's IRC server? |
16:33:47 | bstrie | it's not a special-case for Option or anything, we leave enum representation deliberately unspecified for stuff like this |
16:33:52 | fowl | bstrie, are pointers normally not nullable |
16:34:15 | bstrie | fowl: no, Rust doesn't have a concept of null, unless you're using "unsafe" blocks |
16:34:24 | fowl | ah |
16:34:31 | bstrie | that's what Option is for, to represent null things in a type-safe way |
16:34:47 | flaviu1 | IMO, the whole idea of not having null is a great idea |
16:35:05 | BitPuffin | does rust also call it Some? |
16:35:17 | flaviu1 | BitPuffin: Yes |
16:35:24 | bstrie | flaviu1: but there's no way to do that same optimization for Option<i64>. I'm not aware of any tricks that we're doing to it. LLVM might be |
16:35:36 | EXetoC | some is one of the variants |
16:35:45 | flaviu1 | https://gist.github.com/259138f5e13e15feae8a is my rust code |
16:35:55 | EXetoC | http://static.rust-lang.org/doc/0.10/std/option/index.html#options-and-pointers-%28%22nullable%22-pointers%29 |
16:36:14 | bstrie | there are lots of people who *want* us to be able to optimize Option<i64> by letting them declare a certain value in that range as "impossible" so that the compiler can elide the tag |
16:36:20 | bstrie | but no proposals have been accepted yet |
16:36:42 | fowl | not worth it imo |
16:37:00 | flaviu1 | fowl: Actually, it is sort of. |
16:37:10 | fowl | how could you pick the unlucky number that can't be used then, and how do you work around it in loops and such |
16:37:22 | flaviu1 | low(int64) = -9223372036854775808, high(int64) = 9223372036854775807 |
16:37:30 | flaviu1 | I don't like that asymmetry. |
16:38:00 | flaviu1 | although the implementation might be slower... |
16:38:31 | bstrie | one way or the other you have to do a runtime check, it's just automatic in rust rather than being manual in C (via unions) |
16:39:11 | fowl | my maybe[T] is tuple[has: bool, val: T] |
16:39:28 | flaviu1 | Mine is essentially the same |
16:39:55 | fowl | this way you can unpack it |
16:40:09 | fowl | if (let (has,val) = just(42); has): echo val |
16:41:00 | flaviu1 | That's clever, rudimentary pattern matching :D |
16:41:25 | fowl | yea |
16:41:46 | fowl | i figure it's not worth a variant object; that builds in extra checking whenever you access the value |
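A small sketch contrasting the two representations under discussion — fowl's tuple (unpackable, no discriminant check on access) and a variant object (field access guarded by a check on the discriminator in debug builds). All names are illustrative:

    # tuple form: unpacking works, and field access has no discriminant check
    type Maybe[T] = tuple[has: bool, val: T]
    proc just[T](v: T): Maybe[T] = (has: true, val: v)

    let (has, val) = just(42)
    if has: echo val

    # variant object form: accessing oVal is guarded by a check on oHas
    type Option[T] = object
      case oHas: bool
      of true: oVal: T
      of false: discard

    var o = Option[int](oHas: true, oVal: 42)
    if o.oHas: echo o.oVal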
16:42:08 | fowl | plus sometimes i misuse it >_> |
16:42:10 | EXetoC | in release mode too? |
16:42:20 | fowl | EXetoC, i believe so |
16:43:16 | fowl | EXetoC, nope |
16:44:13 | * | io2 joined #nimrod |
16:52:03 | Araq | bstrie: no, with proper exceptions there is no runtime check whatsoever, that is the point |
16:53:16 | flaviu1 | doesn't setjmp take a while though? |
16:53:32 | Araq | flaviu1: firstly setjmp is not the most efficient way to do it |
16:53:46 | Araq | and secondly setjmp is only called once *before* fib |
16:53:57 | Araq | not in every recursive call |
16:54:05 | Araq | IF you do it right, of course |
16:54:32 | flaviu1 | Well, I could place the try catch inside the fib function, that would slow things down |
16:54:43 | flaviu1 | But what would be the most efficient way to do it? |
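A rough sketch of the shape Araq describes: the handler (and, in a setjmp-based implementation, the setjmp itself) is set up once outside the recursion, so the recursive calls pay nothing on the non-error path. The failure condition here is invented for illustration, and the exception name follows current Nim:

    proc fib(n: int64): int64 =
      if n < 0:
        raise newException(ValueError, "negative input")  # hypothetical failure path
      if n <= 1: return n
      return fib(n-1) + fib(n-2)

    # the handler is established once, outside the recursion
    try:
      echo fib(30)
    except ValueError:
      echo "bad input"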
16:56:15 | bstrie | Araq: I would like to hear more about proper exceptions. and in rust's case, the problem isn't runtime checks. the problem is that generating landing pads doubles compile time, and causes LLVM to miss out on optimizations |
16:56:46 | * | Matthias247 joined #nimrod |
16:57:37 | bstrie | so even though it's "zero overhead", it's a bit misleading as abandoning unwinding entirely would actually yield negative overhead :P |
16:59:04 | flaviu1 | bstrie: Does rust do optimizations on Either too? |
16:59:07 | Araq | bstrie: proper exceptions here mean either table based aka "zero overhead" or setjmp based. Both are faster than having an 'if' after every single call |
16:59:34 | Araq | the prediction buffers in your CPU are not an endless resource |
17:00:05 | Araq | for micro benchmarks you won't ever measure this as a problem |
17:00:10 | bstrie | Araq: table based is what we do, I believe |
17:00:33 | bstrie | the aforementioned "landing pads", though I'm unsure if I'm using their bespoke terminology properly |
17:01:28 | * | vendethiel quit (Ping timeout: 260 seconds) |
17:02:44 | Araq | also even if the Option/if based solution is faster a compiler can easily compile exceptions into Option on these architectures. The other way round is MUCH more difficult |
17:03:21 | Araq | that's the power of abstraction, you told the compiler what you need, it can choose between lots of options... |
17:04:25 | EXetoC | dom96: do IRC responses always contain at most one set of repetitions? the params field is a sequence of strings so that's why I'm asking |
17:04:33 | bstrie | Araq: the original impetus for leaving out exceptions was not performance, it was correctness. exception-safety sucks. languages where such things are not priority #1 have more leeway here |
17:05:45 | bstrie | i.e. the decision was not "exceptions are slow, let's leave them out", it was "exceptions make correctness impossible to reason about, what are our other alternatives" |
17:05:52 | flaviu1 | I don't think the other way around would be much more difficult, I think they'd be about equal difficulty. |
17:06:16 | EXetoC | dom96: chances are you got it right, but I thought I'd ask anyway |
17:06:19 | dom96 | EXetoC: repetitions? i'm not sure what you mean |
17:07:23 | bstrie | that said, exceptions are hardly free. "-fno-exceptions" exists for a reason (unless you're trying to say that GCC and clang implement exceptions the wrong way?) |
17:08:02 | EXetoC | dom96: some parameters come with an unspecified number of values |
17:09:02 | EXetoC | lists basically |
17:10:09 | fowl | EXetoC params is the message split on ' ' |
17:10:41 | dom96 | It's everything after the command split on ' ' |
17:10:50 | dom96 | with the exception of stuff after ':' |
17:11:31 | Araq | bstrie: I'm saying they are cheaper than 'ifs' all over the place |
17:11:58 | Araq | and -fno-exceptions is faster but then that doesn't use ifs either |
17:12:08 | Araq | when you use that, you basically exit/quit on error |
17:12:27 | Araq | which is often fine and the most performant solution |
17:14:08 | Araq | and with nimrod's exception tracking exceptions are quite easy to reason about; IMO anyway |
17:14:15 | Araq | bbl |
17:19:51 | EXetoC | dom96: yes, and params always ends up having the same length for a certain type of reply, right? |
17:20:15 | fowl | EXetoC, if you want to know about the protocol read the RFC |
17:20:46 | fowl | EXetoC, thats a valid assumption |
17:21:00 | dom96 | EXetoC: The position of each param is always the same. |
17:21:19 | dom96 | only differing between commands |
17:21:22 | dom96 | but yeah, do read the RFC. |
17:22:15 | EXetoC | that implies a fixed length for individual commands then |
17:22:26 | EXetoC | I am reading it. it's a little hard to manage without it |
17:23:17 | dom96 | yeah, the IRC module is a bit low level. |
17:25:37 | fowl | servers today seem to run a mix of the first and second irc rfc |
17:25:50 | EXetoC | do you think the event variant is fine? I suppose you could make it really verbose, but it might be best to extend it with a set of procs instead |
17:26:18 | EXetoC | and maybe a high level type too, for accumulating lists until the end reply has been received |
17:27:33 | EXetoC | anyway, enough about that for now. I think we still have more important things to worry about |
17:29:07 | fowl | EXetoC, you mean like make it have a vtable that it dispatches messages to? |
17:32:27 | EXetoC | hm vtable |
17:34:43 | EXetoC | no it's just that some messages might come in chunks, and the accumulation of those chunks could be hidden from users |
17:37:30 | * | Jesin joined #nimrod |
17:39:54 | EXetoC | ok I misread, so I don't know if that's ever the case |
17:49:39 | * | kunev joined #nimrod |
17:58:08 | * | vendethiel joined #nimrod |
18:04:03 | Araq | flaviu1: " I don't think the other way around would be much more difficult, I think they'd be about equal difficulty." -- This is false. One is simply a code generation strategy (replace exceptions with ifs), the other an optimization problem that requires a sophisticated interprocedural analysis. |
18:13:13 | * | BitPuffin quit (Ping timeout: 252 seconds) |
18:15:12 | * | BitPuffin joined #nimrod |
18:16:47 | * | shodan45 joined #nimrod |
18:20:41 | fowl | EXetoC, no messages should come in chunks |
18:21:02 | fowl | well, connecting is different |
18:22:52 | EXetoC | I misinterpreted MSG_NAMREPLY |
18:23:48 | reactormonk | Araq, any comments on #1130 |
18:28:51 | Araq | wait a sec |
18:53:18 | * | kunev quit (Ping timeout: 240 seconds) |
18:55:58 | Araq | reactormonk: how can breaking be anything except the last character? |
18:56:06 | Araq | *position? |
18:57:48 | EXetoC | where are the fixes to that proc? |
19:00:56 | EXetoC | unless you fixed it in another way than what I suggested |
19:04:01 | Araq | EXetoC: whom are you talking to? |
19:05:43 | EXetoC | Araq: reactormonk. breaking+1 -> breaking, breaking+2 -> breaking+1 |
19:06:45 | Araq | reactormonk: 1e30 is fine and doesn't need to become 1.0e30. so the whole 'moveMem' stuff is unnecessary; just append .0 if the literal could otherwise be taken for an integer |
19:11:34 | EXetoC | I don't mind being extra clear, but ok |
19:18:52 | Varriount | Good evening everyone! |
19:21:53 | * | Simn joined #nimrod |
19:22:04 | Araq | hi Simn welcome |
19:22:05 | shevy | Good beer to you too |
19:22:10 | Araq | hi Varriount |
19:22:17 | Simn | Hello Araq, thank you. :) |
19:22:43 | Varriount | Sorry I haven't been on much the past week. I've been busy with work-stuff, and finishing a really good game. |
19:23:37 | shevy | Tetris? |
19:24:31 | fowl | farmville |
19:24:40 | Varriount | Transistor |
19:28:54 | shevy | isn't that an electronic thingy? |
19:29:19 | Varriount | shevy: http://supergiantgames.com/index.php/transistor/ |
19:29:59 | Simn | Araq, I'm currently investigating GCs of languages that compile to C. What made you choose reference counting over a tracing solution? |
19:30:02 | shevy | cool |
19:38:50 | * | Jehan_ joined #nimrod |
19:49:46 | reactormonk | Araq, ruby does that though |
19:55:32 | * | Mat3 joined #nimrod |
19:55:38 | Mat3 | Good Day |
20:04:11 | shodan45 | hello #nimrod |
20:04:38 | Jehan_ | Simn: I'm not Araq, but if I had to guess, it's one or more of the following things (also note that it's not naive reference counting, but deferred reference counting). |
20:04:57 | shodan45 | heh, what's everyone's take on apple's new "swift" language? |
20:06:50 | Jehan_ | (1) Predictable pause times as long as your data doesn't have cycles, (2) no need to register global roots, (3) short-lived objects have very little overhead (like generational GC). |
20:06:59 | flaviu1 | shodan45: So far, I don't think anyone here has really gotten excited over it |
20:08:53 | Jehan_ | shodan45: Some interesting ideas, but as long as it's limited to Apple platforms, no practical value for me (and I'm using Macs for anything but server stuff). |
20:10:03 | Mat3 | shodan45: I'm taking a look at it currently |
20:10:32 | Jehan_ | It seems to be primarily targeted at iOS app developers who don't want to deal with Objective C. |
20:10:41 | Simn | Jehan_, thank you. |
20:13:36 | Araq | Simn: in addition to Jehan_'s points: |
20:14:02 | Araq | - algorithm is not dependent on the amount of live data in the heap |
20:14:21 | Araq | - algorithm is quite close to what is done manually in systems programming languages |
20:17:36 | Simn | Araq, I see, thank you. |
20:17:42 | Mat3 | a language based on SSA and dataflow analysis... interesting |
20:18:11 | Simn | We somehow discarded reference counting for our project from the start and swift reminded me that it exists. |
20:18:57 | Jehan_ | Simn: A lot of the benefits depend on how common cycles are, unfortunately. But the presence of cycles is something that's generally under the control of the programmer. |
20:19:37 | * | Johz joined #nimrod |
20:19:48 | Jehan_ | Not that they can't sneak in under the radar; callbacks implemented via closures are a common example. |
20:20:22 | Araq | Jehan_: cycles are a solved problem though. not in the current collector for now, but it's not hard to make the cycle collector incremental as well |
20:21:26 | Jehan_ | Araq: I know (implemented cycle collection myself a long time ago), but any cycle collection can potentially involve the entire heap. |
20:22:21 | Jehan_ | It's also why DRC is generally better for statically typed languages, since you can prune the traversal more aggressively. |
20:24:52 | Jehan_ | It's also useful for other things. I used RC + cycle detection to deal with nested/dependent mutexes once. |
20:25:26 | Varriount | Araq: What is needed to fix bug #1090? (https://github.com/Araq/Nimrod/issues/1090) |
20:26:18 | Araq | Varriount: isn't that fixed already? |
20:26:25 | Varriount | Araq: My first instinct is to add a check for non-concrete types in the semantic checking procedure for type definitions |
20:26:49 | Varriount | Araq: The bug says 'open' |
20:27:19 | Araq | Varriount: that doesn't mean much |
20:27:44 | Varriount | Araq: What do you mean? The bug tracker wouldn't lie, would it? :3 |
20:29:16 | Araq | Varriount: that means nimrod is constantly evolving and sometimes bugs get fixed as a side effect |
20:29:31 | Varriount | And I just checked, and the bug still appears for the given code example. |
20:30:20 | Araq | well ok |
20:30:22 | Araq | here is the fix: |
20:30:37 | Araq | make computeSizeAux return -2 or something |
20:33:39 | * | reactormonk quit (Ping timeout: 252 seconds) |
20:35:02 | fowl | "Cause of crash is the lack of generic param in TThread. Changing it to TThread[void] fixes the problem." |
20:35:22 | fowl | it would be nice if this was detected, I've done that by accident too |
20:35:45 | flaviu1 | isn't void not supposed to be used as a type? |
20:35:53 | Varriount | ^ Which is why my first instinct is to add a check for concrete types in the semantic checking for type definitions |
20:37:02 | Varriount | flaviu1: It can be used, in certain generic constructs |
20:37:29 | Varriount | For example, to define the return type of a generic procedure. |
20:38:02 | flaviu1 | I thought I had read that it shouldn't on IRC once, never mind I guess |
20:40:09 | Mat3 | is it possible in Nimrod to bind a static declaration (like a constant) to a specific, valid subrange of its type? |
20:40:40 | fowl | example? |
20:41:41 | fowl | flaviu1, type CB [T] = proc(x: T); var x: CB[void] = somefunc; x() works |
20:43:36 | Mat3 | something like: type cAConstant = range [0..n]; const cAConstant = 0xFFFF |
21:00:00 | Araq | const cAConstant: range[0..1000_000] = 0xFFFF |
21:01:07 | Araq | Varriount: you're right but "computeSize" is supposed to do these kinds of checks too |
21:01:52 | EXetoC | a range being just like any other type |
21:04:01 | Mat3 | ok, so it is not possible, thanks |
21:05:09 | EXetoC | I don't know what that is supposed to do |
21:05:34 | EXetoC | either it's a constant or it's a type |
21:05:36 | fowl | Mat3, range is a type so, type myrange = range[0..n]; const foo: myrange = .. |
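A short sketch of the suggestion, assuming hypothetical bounds: the named range type keeps the limit in one exportable place, and the compiler rejects an out-of-range constant at compile time:

    type TLimit = range[0 .. 1_000_000]   # the valid range, defined once and exportable
    const cAConstant: TLimit = 0xFFFF     # ok: 65535 is inside the range
    # const cTooBig: TLimit = 2_000_000   # rejected at compile time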
21:05:37 | Simn | Limiting the range of a _constant_ seems rather strange. |
21:05:51 | fowl | tht is true silven |
21:05:52 | fowl | Simn, * |
21:07:17 | * | hoverbear quit (Read error: Connection reset by peer) |
21:07:57 | * | hoverbear joined #nimrod |
21:08:19 | Varriount | Araq: Also, the bug only happens with definitions that contain object variants that *may* depend on a given type parameter. |
21:09:03 | Mat3 | fowl: ok. The reason for it is decoupling constant definitions from their type, so I can export constants and the compiler can check them against some static limit without needing type information |
21:09:49 | fowl | Mat3, all of that went over my head so..erm.. good luck |
21:10:44 | EXetoC | the constant already is typed as a range |
21:11:17 | Mat3 | I guess the native integer type ? |
21:12:04 | Varriount | Araq: Why would setting the result of computeRecSizeAux to -2 work? |
21:12:46 | EXetoC | Mat3: that should be the underlying binary representation |
21:12:53 | EXetoC | it's not limited to 'int' though |
21:13:03 | EXetoC | but it doesn't work for unsigned types yet |
21:13:06 | Araq | Varriount: that's used for "illegal recursion in type" |
21:13:27 | Araq | we should make that error message "invalid type" then or something like that |
21:13:46 | Mat3 | would be nice |
21:14:18 | Varriount | Araq: Isn't that a bit... vague? |
21:15:11 | EXetoC | the need for it should be rare though |
21:21:37 | Mat3 | indeed, but it's a nice feature to be able to limit global constants to ranges which are valid for a specific platform |
21:22:32 | EXetoC | I was referring to unsigned ranges |
21:22:43 | * | Matthias247 quit (Read error: Connection reset by peer) |
21:22:51 | EXetoC | anyway, the type is a range, so I don't know what the problem is |
21:23:47 | Mat3 | what's the limit of these ranges? |
21:23:53 | EXetoC | it's not going to be an integer type. it might be binary equivalent to one, but the range checking will be performed appropriately |
21:24:49 | EXetoC | Mat3: the ability to have unsigned ranges pretty much |
21:25:16 | EXetoC | where the underlying type is unsigned. something like this will usually be good enough though: Natural* = range[0..high(int)] |
21:25:31 | EXetoC | the system module has this too Positive* = range[1..high(int)] |
21:26:11 | Araq | Varriount: well you can introduce -3 to mean "uninstantiated generic" ... |
21:28:34 | EXetoC | but you write low level code so I'm sure you'd like unsigned ranges at times. I don't think I got around to reporting this limitation |
21:30:53 | Mat3 | ok, for example: I have a buffer for compiled code which should not be smaller than 4096 bytes or greater than 16 KiB (because of some platform-specific memory restrictions). Because I want to avoid bound checks my idea is to define a range for the constants declaring their size (but I was not sure about the correct syntax) |
21:31:08 | Mat3 | and yes, unsigned ranges |
21:32:54 | EXetoC | the whole purpose of it is to have implicit checks be performed, and they should be omitted in release mode |
21:33:14 | * | flaviu1 is now known as newnickname |
21:33:28 | * | newnickname is now known as flaviu1 |
21:33:46 | * | flaviu1 quit (Quit: Leaving.) |
21:34:39 | Mat3 | yes, but this way I need to include an assertion for every declaration (or just get less sleep currently) |
21:34:48 | Mat3 | that's not elegant |
21:35:14 | Mat3 | declaration = function which access that buffer |
21:35:29 | EXetoC | yes so use ranges for the relevant parameters |
21:35:43 | EXetoC | that would make more sense than using a range for the const, but do that too if you want |
21:36:50 | EXetoC | does this make things more clear? "proc p(x: Natural) = discard; p(-1)" |
21:37:04 | Mat3 | yes, thanks |
21:37:08 | EXetoC | this will even fail at compile time because a literal is being passed in |
21:37:44 | Mat3 | another question: Can I set the physical start address of these buffer ? |
21:38:08 | EXetoC | the keyword here is inference of course. it doesn't have to be a literal |
21:38:35 | EXetoC | what buffer? |
21:40:05 | EXetoC | you can cast some memory address to a "ptr array[range, T]" or something. I don't know if that is relevant |
21:40:05 | * | Simn quit (Read error: Connection reset by peer) |
21:40:57 | * | flaviu1 joined #nimrod |
21:42:43 | * | flaviu1 is now known as flaviu |
21:43:12 | Varriount | EXetoC: Don't forget the wonders of unchecked arrays |
21:43:42 | EXetoC | yeah. haven't got around that yet |
21:44:16 | EXetoC | I assume you mean that pragma rather than just 'pointer' :> |
21:45:03 | Mat3 | hmm, I can create an object handling buffer access to specific address ranges, so yes. Anyhow, that's a lot of effort for a simple compiler-related feature |
21:45:46 | Varriount | Mat3: cast[UncheckedArray](pointer) is a lot of work? |
21:46:16 | EXetoC | won't a little bit of data encapsulation do? |
21:48:50 | Mat3 | Varriount: I mean the physical address (the one which is visible on the address bus) and not its native (linear, but still virtual) representation |
21:49:50 | EXetoC | so hide that behind a proc that enforces this |
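A rough sketch of that workaround, assuming a hypothetical fixed address that is already mapped into the process — Nimrod itself can't place a variable at a physical address, so this only reinterprets an existing mapping and hides the cast behind one proc:

    const bufAddr = 0xBEFFE800                  # hypothetical, platform-specific address
    type TCodeBuf = array[0 .. 0x3FF, uint8]    # a 1 KiB window, size chosen for illustration

    proc codeBuf(): ptr TCodeBuf =
      # every access goes through this proc, so the cast lives in one place
      cast[ptr TCodeBuf](bufAddr)

    codeBuf()[][0] = 0x90                       # explicit deref, then index into the buffer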
21:51:14 | Mat3 | yes, as written |
21:51:42 | EXetoC | but a range might do yet again |
21:56:23 | Mat3 | most assemblers support this with a statement like '.data absolute 0xBEFFE800; label rq 0x400...' or similar ones |
21:57:22 | Varriount | My have an asm pragma... |
21:57:26 | Varriount | *We |
21:58:05 | Mat3 | how about: var lvalue: array [0..0x400, uInt8] {.absolute 0xBEFFE800.} |
21:58:40 | Varriount | Does it work? |
21:59:23 | Mat3 | no (pragma does not exist) |
22:00:03 | Mat3 | anyhow, would be nice |
22:02:20 | * | brson joined #nimrod |
22:03:53 | Varriount | http://nimrod-lang.org/manual.html#assembler-statement_toc |
22:04:23 | * | zahary quit (Read error: Connection reset by peer) |
22:04:42 | * | zahary joined #nimrod |
22:07:37 | Varriount | Araq: So why is adding code to computeRecSizeAux better than adding checks to the type definition checking code? |
22:13:12 | * | JehanII joined #nimrod |
22:16:29 | * | Jehan_ quit (Ping timeout: 276 seconds) |
22:17:54 | * | JehanII quit (Client Quit) |
22:19:06 | Mat3 | ciao |
22:19:16 | * | Mat3 quit (Quit: Verlassend) |
22:19:30 | * | Varriount|Mobile joined #nimrod |
22:22:00 | * | Johz quit (Ping timeout: 265 seconds) |
22:22:54 | * | Varriount|Mobile quit (Remote host closed the connection) |
22:27:45 | * | ehaliewicz joined #nimrod |
22:33:59 | * | Varriount|Mobile joined #nimrod |
22:37:54 | * | Demos joined #nimrod |
22:54:29 | * | saml_ joined #nimrod |
23:14:22 | * | darkf joined #nimrod |
23:31:08 | * | brson quit (Quit: leaving) |
23:50:04 | * | io2 quit (Quit: ...take irc away, what are you? genius, billionaire, playboy, philanthropist) |