00:10:05 | * | Guest42307 joined #nim |
00:11:32 | FromGitter | <Varriount> @dom96: I finally got around to fixing NimLime breaking multi-selection. :D |
00:12:25 | FromGitter | <Varriount> Now I just need to find a good input combo for suggestions/lookups |
00:22:02 | * | libman joined #nim |
00:29:09 | * | Navajo joined #nim |
00:29:15 | * | Navajo left #nim (#nim) |
00:32:28 | FromGitter | <Varriount> @araq Does the compiler do any static analysis to optimize heap memory allocations? |
01:03:11 | * | Guest42307 quit (Remote host closed the connection) |
01:05:19 | * | devted quit (Quit: Sleeping.) |
01:26:11 | * | zachcarter quit (Quit: zachcarter) |
01:32:29 | FromGitter | <Varriount> @zacharycarter You might find https://modarchive.org/index.php?request=view_by_license&query=publicdomain a good place to look for video game music |
01:32:47 | FromGitter | <Varriount> Especially for anything retro-sounding. |
01:34:48 | * | yglukhov joined #nim |
01:39:29 | * | yglukhov quit (Ping timeout: 252 seconds) |
01:43:23 | * | bjz_ quit (Quit: Textual IRC Client: www.textualapp.com) |
01:45:57 | * | bjz joined #nim |
01:48:27 | * | chemist69 quit (Disconnected by services) |
01:48:33 | * | chemist69_ joined #nim |
02:02:10 | * | bjz quit (Quit: My MacBook Pro has gone to sleep. ZZZzzz…) |
02:36:00 | * | dexterk_ quit (Quit: Konversation terminated!) |
02:38:59 | * | jordo2323 joined #nim |
02:43:30 | * | bjz joined #nim |
02:43:36 | * | Nobabs27 joined #nim |
02:43:56 | * | jordo2323 quit (Quit: leaving) |
02:49:16 | * | babs_ joined #nim |
02:49:42 | * | bjz quit (Quit: My MacBook Pro has gone to sleep. ZZZzzz…) |
02:51:42 | * | Nobabs27 quit (Ping timeout: 258 seconds) |
03:10:50 | ftsf | o/ |
03:14:44 | libman | https://www.reddit.com/r/nim/comments/65lm6h/any_ideas_on_how_to_improve_performance_further/ |
03:15:19 | * | babs__ joined #nim |
03:17:46 | * | babs_ quit (Ping timeout: 258 seconds) |
03:20:33 | * | bjz joined #nim |
03:23:07 | * | bjz quit (Client Quit) |
03:23:44 | * | bjz joined #nim |
03:37:18 | * | yglukhov joined #nim |
03:41:53 | * | yglukhov quit (Ping timeout: 260 seconds) |
03:52:54 | * | dexterk quit (Quit: hAvE yOu mOOEd tOdAY) |
03:52:54 | * | bjz quit (Quit: My MacBook Pro has gone to sleep. ZZZzzz…) |
03:58:39 | * | bjz joined #nim |
04:08:12 | * | babs__ quit (Quit: Leaving) |
04:08:30 | * | babs__ joined #nim |
04:12:38 | FromGitter | <Varriount> libman: How many cores does your machine have? |
04:12:57 | libman | 4 |
04:14:22 | FromGitter | <Varriount> libman: Then that's why only 4 threads are spawned. `spawn` creates a threadpool |
04:16:00 | libman | Yeah, I think the program that reddit user u/SiD3W4y is asking about expects to be able to start an arbitrary number of threads. |
04:16:31 | FromGitter | <Varriount> https://nim-lang.org/docs/threadpool.html |
04:16:49 | FromGitter | <Varriount> Although, the threadpool is global for an entire program. |
04:17:26 | FromGitter | <Varriount> Part of me wonders if the API would have been better if the threadpool was an actual non-global object, but w/e |
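Editor's note: a minimal sketch of the `spawn`/threadpool behaviour described above — the pool defaults to the number of cores, and `setMaxPoolSize` can raise it. The `work` proc and the counts are illustrative only; compile with `--threads:on`.

```nim
import threadpool, cpuinfo

proc work(n: int): int = n * n

when isMainModule:
  echo "cores: ", countProcessors()   # the default pool size follows this
  # setMaxPoolSize(16)                # uncomment to allow more worker threads
  var results: seq[FlowVar[int]] = @[]
  for i in 0 ..< 16:
    results.add(spawn work(i))        # tasks queue up; only the pool's threads run them
  for fv in results:
    echo ^fv                          # ^ blocks until the FlowVar has a value
  sync()
```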
04:33:48 | * | babs_ joined #nim |
04:36:11 | * | babs__ quit (Ping timeout: 240 seconds) |
04:37:05 | * | babs_ quit (Client Quit) |
04:37:22 | * | babs_ joined #nim |
04:37:59 | * | Snircle quit (Quit: Textual IRC Client: www.textualapp.com) |
04:39:24 | * | babs__ joined #nim |
04:41:53 | * | bjz quit (Quit: My MacBook Pro has gone to sleep. ZZZzzz…) |
04:42:08 | * | babs__ is now known as Nobabs27 |
04:42:39 | * | babs_ quit (Ping timeout: 260 seconds) |
04:44:02 | * | Nobabs27 quit (Client Quit) |
04:44:18 | * | Nobabs27 joined #nim |
05:02:12 | * | bjz joined #nim |
05:03:49 | * | Nobabs25 joined #nim |
05:06:15 | * | Nobabs27 quit (Ping timeout: 258 seconds) |
05:38:01 | * | Nobabs25 quit (Quit: Leaving) |
05:39:48 | * | yglukhov joined #nim |
05:44:12 | * | yglukhov quit (Ping timeout: 258 seconds) |
05:56:17 | * | chemist69_ quit (Ping timeout: 260 seconds) |
05:58:26 | * | chemist69 joined #nim |
06:07:57 | * | Vladar joined #nim |
06:25:39 | * | xmonader3 joined #nim |
06:36:09 | * | rokups joined #nim |
06:37:26 | * | nsf joined #nim |
06:41:02 | * | Arrrr joined #nim |
06:45:50 | * | Arrrr quit (Ping timeout: 252 seconds) |
07:16:42 | * | yglukhov joined #nim |
07:21:07 | * | yglukhov quit (Ping timeout: 240 seconds) |
07:29:54 | * | Arrrr joined #nim |
07:29:54 | * | Arrrr quit (Changing host) |
07:29:54 | * | Arrrr joined #nim |
07:31:06 | * | libman quit (Quit: Connection closed for inactivity) |
07:36:03 | * | xmonader2 joined #nim |
07:39:12 | * | xmonader3 quit (Ping timeout: 258 seconds) |
08:02:13 | * | chemist69 quit (Ping timeout: 240 seconds) |
08:07:14 | * | chemist69 joined #nim |
08:32:16 | FromGitter | <mratsim> Design question: for my multidimensional array library I want to save memory by shallow copying by default and only deep copying when there is a modification. ⏎ What's the best way to detect whether a copy is needed? Refcounting/copy-on-write? |
08:32:52 | ldlework | Is there a way to reflect and get the arguments of an arbitrary proc or method? |
08:33:51 | ftsf | https://gist.github.com/ftsf/133a6d6797d7bd748b9e5d00b9d2ab1b trying to make a simple object pool (free list), any idea why I can't cast a var T to ptr T in getNext ? |
08:34:09 | ldlework | heh I was just about to sit down and write an object pool |
08:41:18 | Arrrr | Try with addr(item) |
08:41:40 | ftsf | hmm but I don't want the addr of item, i'm using item to store the addr |
08:45:24 | ftsf | i wish it told me _why_ it can't be cast =p |
08:45:49 | FromGitter | <mratsim> item should be a ByteAddress no? |
08:46:47 | ftsf | item is of type T (generally an object), but is also used to store a pointer to another T (next free slot) when it's empty. |
08:47:05 | FromGitter | <mratsim> I use this snippet stolen from Jehan for pointer arithmetics if it helps: https://github.com/mratsim/Arraymancer/blob/master/src/utils/pointer_arithmetic.nim |
08:48:18 | FromGitter | <mratsim> for getNext you can just add sizeof(type p[]) |
09:02:35 | * | yglukhov joined #nim |
09:06:50 | * | yglukhov quit (Ping timeout: 255 seconds) |
09:13:11 | * | Arrrr quit (Quit: Leaving.) |
09:19:50 | * | shashlick quit (Ping timeout: 252 seconds) |
09:20:19 | * | shashlick joined #nim |
09:22:58 | * | yglukhov joined #nim |
09:31:04 | * | vivus joined #nim |
09:39:56 | ftsf | \ |
09:40:05 | ftsf | \o/ got it working, but it's uglier than I'd hoped =( |
09:40:21 | FromGitter | <mratsim> how? |
09:40:57 | ftsf | using copyMem to setNext |
09:42:10 | ftsf | https://gist.github.com/ftsf/133a6d6797d7bd748b9e5d00b9d2ab1b |
09:43:39 | * | krux02 joined #nim |
09:47:12 | FromGitter | <mratsim> I think comments from others might be better, I'm not familiar with object pools. I will probably hit the same issues as you on my lib though: how to iterate through non-contiguous arrays |
09:50:28 | ftsf | one thing that was odd (but understandable) was when assigning to a target larger than the source, the target had data changed (to nonzero values) in the portion higher than the source size |
09:50:37 | ftsf | hence having to use copymem |
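Editor's note: for readers without the gist, here is a hedged sketch of the copyMem trick being discussed — an empty slot reuses its own bytes to store a pointer to the next free slot. The `Slot` type and the four-element pool are made up for illustration; this is not ftsf's actual code.

```nim
type
  Slot = object
    a, b: float64           # any payload that is at least pointer-sized

# An empty slot's own memory is reused to hold a pointer to the next free slot.
proc setNext(slot: var Slot, next: ptr Slot) =
  copyMem(addr slot, unsafeAddr next, sizeof(pointer))

proc getNext(slot: var Slot): ptr Slot =
  copyMem(addr result, addr slot, sizeof(pointer))

var pool: array[4, Slot]
var head: ptr Slot = nil
for i in countdown(pool.high, 0):     # thread every slot onto the free list
  setNext(pool[i], head)
  head = addr pool[i]
echo getNext(pool[0]) == addr pool[1] # true: slot 0 points at slot 1
```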
09:54:32 | * | krux02 quit (Quit: Verlassend) |
10:07:44 | * | Sembei joined #nim |
10:10:07 | FromGitter | <gogolxdong> @dom96 where does the compiler define `#define NIM_INTBITS 32`? |
10:10:35 | ftsf | /usr/include/nimbase.h ? |
10:10:56 | ftsf | nope sorry |
10:10:58 | ftsf | ignore me |
10:16:53 | * | zachcarter joined #nim |
10:17:18 | FromGitter | <gogolxdong> there is no specific target platform, but the compiled C file has that macro definition |
10:18:23 | FromGitter | <gogolxdong> is it possible to build nimkernel on arm_64 with gcc and as or nasm |
10:18:34 | FromGitter | <gogolxdong> sorry, amd64 |
10:20:11 | dom96 | My guess would also be nimbase.h |
10:20:23 | dom96 | I'm guessing nimkernel would need some modifications to support amd64 |
10:24:16 | FromGitter | <gogolxdong> I looked into nimbase.h; NI depends on the definition of a C macro |
10:28:39 | FromGitter | <gogolxdong> which defines `#define NIM_INTBITS X` |
10:30:27 | * | nhywyll joined #nim |
10:31:36 | * | nhywyll quit (Client Quit) |
10:32:16 | * | rokups quit (Quit: Connection closed for inactivity) |
10:36:09 | * | Ven joined #nim |
10:36:32 | * | Ven is now known as Guest88951 |
10:37:25 | FromGitter | <gogolxdong> running `nim c nakefile.nim` makes main.c in nimcache get overwritten with `#define NIM_INTBITS 32` |
10:44:11 | * | bjz_ joined #nim |
10:46:39 | * | bjz quit (Ping timeout: 258 seconds) |
10:55:11 | FromGitter | <stisa> @gogolxdong It's defined in the c generation I think, here https://github.com/nim-lang/Nim/blob/7e351fc7fa96b4d560c5a51118bab22abb590585/compiler/cgen.nim#L855 |
10:58:59 | dom96 | In that case, you probably control it via the --cpu flag. For example: --cpu:amd64 |
11:01:39 | FromGitter | <gogolxdong> I will try :) |
11:02:49 | * | Snircle joined #nim |
11:12:51 | FromGitter | <zacharycarter> @Varriount thanks for that link |
11:15:21 | zachcarter | @Varriount think I just found a new track for space invaders on there :P |
11:25:35 | FromGitter | <gogolxdong> doesn't work |
11:27:02 | * | Trustable joined #nim |
11:33:27 | * | Guest88951 quit (Quit: My MacBook has gone to sleep. ZZZzzz…) |
11:37:29 | dom96 | huh, apparently Amazon has used copies of my book already. That sure is impossible: https://www.amazon.co.uk/Nim-Action-Dominik-Picheta/dp/1617293431/ref=sr_1_1?ie=UTF8&qid=1479663850&sr=8-1&keywords=nim+in+action |
11:38:34 | FromGitter | <mratsim> 88 pounds, wow :O |
11:39:03 | * | alectic joined #nim |
11:40:12 | dom96 | Either it's a scam or a mistake, in any case don't buy it :) |
11:40:25 | dom96 | (From Amazon at least for now) |
11:42:31 | alectic | Hi. I just registered yesterday on the forum but haven't received any confirmation mail at all. Is there anything to be done, like resend the mail or such (haven't seen the option anywhere)? I can't login without it being confirmed. |
11:43:28 | dom96 | alectic: hey, maybe your email provider decided it was spam. I can activate the account for you, what's your nickname? |
11:43:29 | FromGitter | <mratsim> Seems like you were hit by my curse :P. "Cast @dom96" |
11:44:15 | alectic | dom96: I use gmail I don't think that's the issue. My nickname is alexdreptu |
11:44:30 | dom96 | alectic: then perhaps nimforum is just failing :) |
11:44:47 | dom96 | Activated |
11:44:56 | alectic | dom96: I wouldn't know but I thought to mention something on IRC |
11:45:00 | alectic | dom96: thank you |
11:56:22 | FromGitter | <Bennyelg> Anyone can help? |
11:56:27 | FromGitter | <Bennyelg> ```code paste, see link``` [https://gitter.im/nim-lang/Nim?at=58f4ad6e8fcce56b200d2fca] |
11:56:37 | FromGitter | <Bennyelg> What is wrong ? :( |
12:01:18 | FromGitter | <RSDuck> I think you have to do it this way: ⏎ ⏎ ```var c = new(cursor) ⏎ c.dbConnection = dbCon ⏎ return c.getCursor()``` [https://gitter.im/nim-lang/Nim?at=58f4ae908bb56c2d11b64b5a] |
12:01:50 | FromGitter | <Bennyelg> Great thanks. |
12:02:31 | FromGitter | <RSDuck> BTW if you don't plan to modify c you should make it a let variable |
12:02:50 | FromGitter | <Bennyelg> Yea I thought about it now :D |
12:02:59 | * | alectic quit (Quit: because gitter) |
12:03:41 | FromGitter | <RSDuck> for easier construction of these ref objects I usually create a proc like this: |
12:04:34 | FromGitter | <RSDuck> ```code paste, see link``` [https://gitter.im/nim-lang/Nim?at=58f4af55d32c6f2f09133c46] |
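Editor's note: the pasted constructor isn't preserved in the log. A typical version of the pattern RSDuck is referring to, with type and field names assumed from the earlier snippet, might look like:

```nim
type
  Cursor = ref object
    dbConnection: string   # stand-in for the real connection type

proc newCursor(dbCon: string): Cursor =
  # construct and initialise the ref object in one place
  Cursor(dbConnection: dbCon)

let c = newCursor("my-connection")
```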
12:06:43 | * | peted quit (Quit: WeeChat 1.4) |
12:07:00 | * | peted joined #nim |
12:07:12 | zachcarter | invaders are coming! http://imgur.com/a/FXTyh |
12:13:02 | zachcarter | do they look too tightly packed? http://imgur.com/a/gV7Cq |
12:13:30 | zachcarter | I think so.. one laser shot as it stands could take out multiple invaders |
12:14:39 | zachcarter | http://imgur.com/a/P0vTR looks better |
12:14:49 | FromGitter | <gogolxdong> in all other cases it is compiled with `#define NIM_INTBITS 64` |
12:15:11 | FromGitter | <mratsim> You have some margin before reaching touhou level https://theskb.files.wordpress.com/2015/12/touhou.gif |
12:16:06 | FromGitter | <RSDuck> you could also position them in other formations(like a circle or other geometric bodies) |
12:20:18 | FromGitter | <gogolxdong> I'm not sure what happened |
12:48:37 | * | zachcarter quit (Read error: Connection reset by peer) |
12:48:59 | * | zachcarter joined #nim |
12:57:34 | * | vivus quit (Quit: Leaving) |
13:01:37 | FromGitter | <Varriount> Zachcarter: Generally the vertical and horizontal spacing is the same. |
13:02:31 | FromGitter | <Varriount> https://screenshots.en.sftcdn.net/en/scrn/25000/25176/chicken-invaders-1.jpg |
13:06:03 | FromGitter | <Varriount> @mratsim Could to explain your shadow copying question again? |
13:06:56 | FromGitter | <Varriount> *shallow |
13:07:34 | FromGitter | <mratsim> This is my data structure ⏎ ⏎ ```code paste, see link``` ⏎ ⏎ data is potentially a huge 300 MB to a few GB seq [https://gitter.im/nim-lang/Nim?at=58f4be198e4b63533dd81c7f] |
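Editor's note: the pasted type isn't preserved in the log. Judging from the fields discussed below (dimensions, strides, offset, data), it presumably looks roughly like this sketch; the integer `offset` is an assumption.

```nim
type
  Tensor[T] = object
    dimensions: seq[int]  # shape, e.g. @[3, 4]
    strides: seq[int]     # element step per dimension
    offset: int           # index of the first element inside `data`
    data: seq[T]          # the actual values; potentially hundreds of MB
```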
13:07:37 | federico3 | Being able to generate parser directly from ABNF would be really nice - http://marcelog.github.io/articles/abnf_grammars_in_elixir.html |
13:08:41 | FromGitter | <mratsim> It might be quite costly to copy this data structure when just referring to “data” is enough |
13:09:21 | FromGitter | <Varriount> Why is it an object? Make it a ref |
13:10:51 | FromGitter | <mratsim> mmmh, let me think |
13:11:14 | FromGitter | <Varriount> You aren't receiving any benefits from using an object type - sequences store their data in a separate memory chunk anyway |
13:11:53 | FromGitter | <mratsim> that’s why I used an object type; since it’s stored somewhere else anyway, why introduce another indirection layer |
13:13:06 | FromGitter | <Varriount> Because an object type is copied on assignment. Any members that are strings or sequences have their data copied too |
13:14:48 | FromGitter | <Varriount> Unless marked as shallow or wrapped in a reference type, strings and sequences can only ever be referenced in one location. |
13:16:08 | FromGitter | <mratsim> this is what I want for all the object fields, except for data. I’m still thinking over whether it’s best to deep copy and expect the compiler/GC to free/reuse the memory when the original “Tensor” is not used |
13:16:49 | * | libman joined #nim |
13:17:34 | FromGitter | <mratsim> or if I should manage memory assignment/copy/etc manually in my library. |
13:17:43 | FromGitter | <Varriount> You *want* nearly everything in that object to be copied on assignment? |
13:17:43 | FromGitter | <Varriount> I mean, I guess users could still wrap it in a ref |
13:18:33 | FromGitter | <Varriount> On their own |
13:19:32 | FromGitter | <mratsim> yes. concrete example: for a 3 by 4 matrix, dimensions will hold @[4, 3], strides will hold [4, 1], offset will just be a pointer to data[0] and data will hold the actual data of the matrix |
13:20:37 | FromGitter | <mratsim> now if I transpose it. dimensions is @[3, 4], strides becomes @[1,4], data can stay the same. It’s just a different view of the same data |
13:21:38 | FromGitter | <Varriount> @mratsim If you keep it an object type, then only way to prevent the data attribute from being copied is to either mark it as shallow (which means data can't be modified) or make it a `ref seq` |
13:21:53 | FromGitter | <mratsim> now I want to update the original matrix at coordinate 0, 0: ⏎ ⏎ 1) if it was deep copied for transposition, no worries ⏎ 2) if it was shallow copied, that’s where the issues come in. [https://gitter.im/nim-lang/Nim?at=58f4c174a0e485624211452b] |
13:23:33 | FromGitter | <Varriount> @mratsim I think you might be a bit confused. |
13:23:38 | FromGitter | <mratsim> So I’m evaluating the difficulty of using ref/shallow copy to get the best memory usage vs deep copying and trusting the Nim GC/compiler |
13:23:52 | FromGitter | <mratsim> probably :/ spend too much time thinking on it |
13:24:06 | FromGitter | <Varriount> Deep copying only occurs in special cases, usually involving threading. |
13:24:38 | FromGitter | <Varriount> Deep copying means copying the entire object, creating copies of reference types, etc |
13:25:00 | FromGitter | <Varriount> The entire tree, references and all, is duplicated. |
13:25:14 | * | yglukhov quit (Remote host closed the connection) |
13:25:25 | FromGitter | <Varriount> This is in contrast to default assignment/copying semantics |
13:26:13 | FromGitter | <Varriount> For reference types, only the reference itself (the pointer) is usually copied. The data the reference points to is not usually copied. |
13:27:02 | FromGitter | <mratsim> But just to make sure: currently my object wraps sequences. So if it is copied, every seq is copied, deeply so, right? |
13:27:11 | FromGitter | <Varriount> Yes. |
13:27:21 | FromGitter | <Varriount> Not deeply |
13:27:52 | FromGitter | <Varriount> If the sequences contain references, only the references themselves, not the data they point to, will be copied. |
13:28:26 | FromGitter | <mratsim> ok. |
13:29:44 | FromGitter | <Varriount> Will 'data' be modified multiple times after the object has been created? |
13:30:30 | FromGitter | <mratsim> Some calculations like transpose can be done just by modifying dimensions and strides. |
13:31:21 | FromGitter | <mratsim> but data can be modified later on matrix A and transpose(A) independently |
13:31:40 | FromGitter | <Varriount> Then `data` will probably need to turn into a `ref seq` |
13:32:21 | FromGitter | <Varriount> If you don't want the sequence data to be copied along with the others. |
13:32:51 | FromGitter | <Varriount> Will the modifications change the length of `data`? |
13:33:00 | FromGitter | <mratsim> no they won't |
13:33:31 | FromGitter | <mratsim> but it can be all elements in data are multiplied by 2 for example |
13:34:03 | FromGitter | <Varriount> Hm, then maybe you can get away with just marking it as shallow() |
13:35:55 | FromGitter | <Varriount> @mratsim https://forum.nim-lang.org/t/2665#16491 |
13:36:10 | FromGitter | <mratsim> That’s what I was thinking, or just use {.shallow.} pragma. |
13:38:38 | FromGitter | <Varriount> You just need to make sure any sequence marked as shallow never has its size changed, otherwise weird things happen. |
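Editor's note: a hedged illustration of the two options just mentioned, under the default refc GC semantics of that era — `system.shallow` on a seq, or the `{.shallow.}` pragma on the wrapping type. The `Blob` type is made up for the example.

```nim
type
  Blob {.shallow.} = object          # assignments of Blob may copy its seq shallowly
    data: seq[float32]

var a = Blob(data: newSeq[float32](1_000_000))
shallow(a.data)                      # mark the seq so assignments share the buffer
var b = a                            # b.data now aliases a's buffer; never resize it
```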
13:39:24 | FromGitter | <mratsim> and then I guess, every time a proc is modifying the underlying data, I have to create a new object if there are more than 1 ref to it |
13:41:11 | FromGitter | <Varriount> Huh? |
13:42:18 | FromGitter | <mratsim> if I have A and B, referring to the same underlying data. I want to do 2*B, but not change A. |
13:43:11 | FromGitter | <Varriount> Yes. |
13:43:50 | FromGitter | <Varriount> But again, why not do that with all the sequence members? |
13:44:30 | FromGitter | <Varriount> @zacharycarter Did you pick any track in particular? |
13:45:43 | FromGitter | <mratsim> most assignments will need to modify dimensions and strides, and not data |
13:46:14 | FromGitter | <mratsim> is there a way to ask Nim GC if an object has the only reference to a ref seq ? |
13:47:36 | dom96 | federico3: write a Nimble package for it :) |
13:50:15 | FromGitter | <Varriount> @mratsim Unless a sequence is shallow, there will only ever be one reference to it. |
13:50:43 | FromGitter | <Varriount> @mratsim Remember, a sequence is backed by an array allocated from heap memory. |
13:51:13 | FromGitter | <Varriount> If the sequence has to change size, memory may need to be reallocated, and the pointer to the heap data will change. |
13:54:02 | FromGitter | <mratsim> I understand that. But if I use a ref object or a ref seq, I have to keep track of when the data can be modified directly (only one ref) or when I have to copy it (more than one ref to it) |
13:55:24 | FromGitter | <mratsim> Basically even if 2 arrays share the same underlying data for memory efficiency, as soon as there is a modification, they must behave as independent entities |
13:56:44 | FromGitter | <Varriount> Yes. |
13:56:45 | FromGitter | <mratsim> So either I pay the cost upfront by always copying on assignment. Advantage: it’s easier to reason about; disadvantage: I may use MB to GB more memory than necessary (if there is no subsequent “write”). |
13:57:45 | FromGitter | <Varriount> @mratsim By the way, are you sure these sequences shouldn't be arrays? |
13:58:00 | FromGitter | <mratsim> Or I defer the cost by only copying when writing. Advantage: memory is only allocated as needed. Disadvantage: every time I want to do a modification I need to be very careful and track how many references I have to the underlying data (or better, ask the GC to do it for me) |
13:58:13 | FromGitter | <Varriount> @mratsim Yes. |
13:58:36 | FromGitter | <Varriount> @mratsim Or you can always copy on write. |
13:59:15 | FromGitter | <mratsim> Ideally they should, or unchecked arrays, but I don’t want to have a static[int] in the type. I would get the same issues as when I was using andrea’s linalg library |
13:59:18 | FromGitter | <Varriount> The GC can give you a lower bound on the number of references for something, but not an upper bound. |
14:00:08 | FromGitter | <Varriount> @mratsim You might want to have dimensions and strides stored in one sequence then, to minimize small allocations |
14:01:12 | FromGitter | <mratsim> that won’t be practical unfortunately. I would then need another variable to keep track of the dimensions of my ndarray |
14:02:13 | * | chemist69 quit (Ping timeout: 240 seconds) |
14:03:47 | FromGitter | <mratsim> let’s say you want to access location Matrix[i,j]. It’s at stride[0]*i + stride[1]*j + offset |
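Editor's note: a minimal sketch of that indexing rule, plus the transpose-by-swapping-strides idea mentioned earlier, assuming the Tensor fields sketched above with an integer `offset`.

```nim
proc at[T](t: Tensor[T]; i, j: int): T =
  # element (i, j) of a 2-D view into the flat data buffer
  t.data[t.offset + t.strides[0] * i + t.strides[1] * j]

proc transposed[T](t: Tensor[T]): Tensor[T] =
  result = t                                    # note: the seq fields are copied here
  swap(result.dimensions[0], result.dimensions[1])
  swap(result.strides[0], result.strides[1])
```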
14:06:07 | FromGitter | <mratsim> Thanks @Varriount. I will first continue up to a point I can use the library for real world scenarios (with the current seq deep assignment semantics). I will then benchmark the difference if I do shallow copy. |
14:07:12 | * | chemist69 joined #nim |
14:08:05 | FromGitter | <Varriount> I'd say that you should focus on implementation, then try optimization. |
14:12:22 | * | zachcarter quit (Read error: Connection reset by peer) |
14:12:43 | * | zachcarter joined #nim |
14:14:07 | FromGitter | <mratsim> Actually the basic functionality I need is done (besides GPU support). The last thing was how to implement matrix transposition, which can be done 3 ways: ⏎ ⏎ 1) keeping the ref to the same data, and exchanging dimensions and strides (lowest cost in memory space, no cost in reordering data) ⏎ 2) copying the data, and exchanging dimensions and strides (high cost in memory space, no cost in reordering data) ⏎ 3) copying the data and reordering the actual data (high cost in memory space, high cost in reordering the data) [https://gitter.im/nim-lang/Nim?at=58f4cdb28e4b63533dd86313] |
14:21:59 | * | devted joined #nim |
14:28:11 | * | gokr joined #nim |
14:30:41 | FromGitter | <Varriount> Hm. It depends on the data size you’re supporting |
14:31:37 | * | Vladar quit (Remote host closed the connection) |
14:33:08 | FromGitter | <mratsim> Most useful shape for neural networks is 224x224x3 (for RGB) of float32 (4bytes) |
14:33:16 | * | Tiberium joined #nim |
14:33:55 | * | bjz_ quit (Quit: My MacBook Pro has gone to sleep. ZZZzzz…) |
14:34:30 | FromGitter | <mratsim> Ideally I would like to store at least 16 to 256 of those in a Tensor |
14:38:57 | * | arnetheduck quit (Ping timeout: 258 seconds) |
14:40:39 | FromGitter | <Varriount> Number one sounds best then. |
14:42:24 | FromGitter | <mratsim> And always Copy-on-Write to avoid keeping track of refcount I guess |
14:42:34 | FromGitter | <Varriount> What would the length of the `dimensions` and `strides` members be for that kind of Tensor? |
14:43:29 | FromGitter | <mratsim> dimensions = (256, 224, 224, 3) |
14:44:15 | FromGitter | <mratsim> strides = (150528, 672, 3, 1) |
14:45:31 | FromGitter | <Varriount> Ok, so they wouldn't be nearly as long as the data sequence |
14:45:40 | FromGitter | <mratsim> if 1D Vector: length is 1, if 2d matrices, length is 2, if 3D Tensor, length is 3, if 4D like my example length is 4 |
14:46:19 | FromGitter | <mratsim> 5D can happen for videos (5th dimension will be time) or 3D images |
14:46:51 | FromGitter | <mratsim> 6D may happen in some years (3D videos) |
14:47:09 | FromGitter | <mratsim> 7D will not happen |
14:47:55 | FromGitter | <mratsim> so the length of dimensions and strides is always less than 7 |
14:50:27 | * | Jesin joined #nim |
14:51:38 | * | freevryheid joined #nim |
14:51:51 | * | freevryheid is now known as fvs |
15:12:42 | * | Arrrr joined #nim |
15:12:42 | * | Arrrr quit (Changing host) |
15:12:42 | * | Arrrr joined #nim |
15:26:11 | * | yglukhov joined #nim |
15:33:29 | * | Nobabs27 joined #nim |
15:41:26 | * | vlad1777d joined #nim |
15:42:50 | * | Vladar joined #nim |
15:48:39 | * | Nobabs25 joined #nim |
15:51:26 | * | Nobabs27 quit (Ping timeout: 252 seconds) |
15:51:50 | shashlick | does Nim use stdcall convention on Windows by default for all DLLs calls? |
15:52:51 | * | Tiberium quit (Remote host closed the connection) |
15:53:05 | * | Tiberium joined #nim |
15:57:59 | * | filcuc joined #nim |
15:58:07 | * | filcuc quit (Client Quit) |
16:06:25 | FromGitter | <gogolxdong> binutils work well |
16:07:53 | FromGitter | <gogolxdong> it's inspiring |
16:12:40 | * | Nobabs227 joined #nim |
16:13:29 | * | nsf quit (Quit: WeeChat 1.7) |
16:14:25 | * | sz0 joined #nim |
16:14:36 | FromGitter | <gogolxdong> I can run nimkernel on amd64 platform now :) |
16:15:05 | * | Nobabs25 quit (Ping timeout: 260 seconds) |
16:15:27 | FromGitter | <gogolxdong> be enlightened all of a sudden |
16:15:29 | * | Nobabs27 joined #nim |
16:17:32 | * | Nobabs227 quit (Ping timeout: 240 seconds) |
16:19:40 | * | Nobabs25 joined #nim |
16:22:05 | * | Nobabs27 quit (Ping timeout: 260 seconds) |
16:30:49 | * | Arrrr quit (Quit: Leaving.) |
16:31:52 | FromGitter | <Bennyelg> Hey guys, does anyone know who the Nim book is aimed at? |
16:37:40 | * | Nobabs227 joined #nim |
16:40:23 | * | Nobabs25 quit (Ping timeout: 252 seconds) |
16:41:53 | * | fastrom quit (Quit: Leaving.) |
16:44:48 | * | fastrom joined #nim |
16:55:52 | Tiberium | Bennyelg: you mean target audience? |
16:56:39 | * | Nobabs25 joined #nim |
16:56:42 | Tiberium | there's first chapter for free |
16:57:31 | Tiberium | "This book is by no means a beginner’s book. k. It assumes that you have knowledge of at least one other programming language and that you have experience writing software in it. " from free chapter (1.4 ) |
16:59:07 | * | fastrom quit (Quit: Leaving.) |
16:59:23 | * | Nobabs227 quit (Ping timeout: 260 seconds) |
17:03:43 | * | fastrom joined #nim |
17:10:15 | * | yglukhov quit (Remote host closed the connection) |
17:11:07 | * | gokr quit (Ping timeout: 240 seconds) |
17:11:30 | dom96 | shashlick: you need to specify it via a pragma |
17:11:32 | dom96 | IIRC |
17:11:41 | * | Nobabs227 joined #nim |
17:14:15 | * | Nobabs25 quit (Ping timeout: 268 seconds) |
17:16:02 | * | fastrom quit (Quit: Leaving.) |
17:17:09 | * | fastrom joined #nim |
17:17:58 | Tiberium | gogolxdong can you please upload it to github? |
17:18:00 | Tiberium | maybe as a fork |
17:18:05 | Tiberium | I want to play with it too :) |
17:18:47 | FromGitter | <TiberiumPY> @gogolxdong please :) |
17:22:40 | * | Nobabs25 joined #nim |
17:25:10 | * | Nobabs227 quit (Ping timeout: 240 seconds) |
17:26:33 | * | zachcarter quit (Quit: zachcarter) |
17:27:05 | * | fastrom quit (Quit: Leaving.) |
17:28:33 | ldlework | Is there a way to reflect the parameters of an arbitrary function or method? |
17:28:39 | * | Nobabs227 joined #nim |
17:31:28 | * | Nobabs25 quit (Ping timeout: 260 seconds) |
17:31:40 | * | Nobabs25 joined #nim |
17:33:54 | * | Nobabs227 quit (Ping timeout: 240 seconds) |
17:36:25 | * | Nobabs25 quit (Client Quit) |
17:36:36 | shashlick | dom96: thanks |
17:39:00 | shashlick | dom96: how do you do it? I get an error for => proc BASS_ChannelGetData*(handle: DWORD, buffer: pointer, length: DWORD): DWORD {.stdcall, .importc, dynlib: basslib.} |
17:39:27 | demi- | the importc shouldn't have a leading `.` |
17:39:58 | Tiberium | wait, we're not the first to do nim cross-compilation via mingw |
17:40:14 | Tiberium | https://github.com/vegansk/nimtests/blob/b179e9f1ecf63cd7dcd21b8f64dab3559f0e7b6a/basic/crosscompile/nim.cfg |
17:42:41 | shashlick | demi-: thanks! that worked |
17:44:15 | shashlick | in fact, changing ChannelGetData() to stdcall got rid of the bass.dll crash I was seeing (Varriount, dom96) |
17:44:26 | shashlick | life's good :) |
17:44:29 | * | Trustable quit (Remote host closed the connection) |
17:45:32 | demi- | shashlick: the `{.` and `.}` enclose the pragmas, you just comma separate them within |
17:45:55 | demi- | the leading or trailing dots aren't significant to the pragmas themselves |
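Editor's note: putting demi-'s point together with the earlier error, the corrected declaration would read roughly as below. The `DWORD` alias and the library name are stand-ins for whatever the real bass binding defines.

```nim
type DWORD = uint32          # assumption; the real binding defines its own alias
const basslib = "bass.dll"   # assumption; platform-specific in practice

proc BASS_ChannelGetData*(handle: DWORD, buffer: pointer,
                          length: DWORD): DWORD {.stdcall, importc, dynlib: basslib.}
  # note: no leading '.' before importc inside the {. .} block
```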
17:47:42 | shashlick | demi-: that helps |
17:48:07 | shashlick | how is stdcall different from the default calling convention that Nim uses? |
17:51:33 | FromGitter | <stisa> shashlick : not sure about the specifics, but there's some explanation here https://nim-lang.org/docs/manual.html#types-procedural-type |
17:52:07 | ldlework | shashlick: whathca makin |
17:52:48 | * | Nobabs27 joined #nim |
17:54:18 | shashlick | ldlework: I'm using Nim to read audio input (using BASS.dll) and intend doing some realtime analysis |
17:54:38 | ldlework | shashlick: ah cool. I thought maybe you were making some music production stuff. |
17:55:48 | shashlick | two ideas - one is beat detection which seems ambitious given i'm new to audio analysis, the other which I'm planning on starting with, is to identify the guitar chord i'm playing (most dominant low frequency) and play back a bass note that corresponds to it |
17:58:04 | * | rauss joined #nim |
17:59:01 | shashlick | basically automating my bass guitar guy |
17:59:24 | shashlick | only feasible now cause Nim is fast enough to do this, project was too ambitious for Python |
18:02:51 | * | bungoman_ joined #nim |
18:06:31 | * | Nobabs25 joined #nim |
18:06:33 | * | bungoman quit (Ping timeout: 240 seconds) |
18:09:23 | * | Nobabs27 quit (Ping timeout: 260 seconds) |
18:10:02 | * | fastrom joined #nim |
18:24:27 | * | sz0 quit (Quit: Connection closed for inactivity) |
18:28:32 | * | devted quit (Ping timeout: 240 seconds) |
18:35:51 | * | fastrom quit (Quit: Leaving.) |
18:36:55 | * | Araq joined #nim |
18:37:01 | dom96 | ahh cool. Yeah, the calling conventions are a real gotcha |
18:41:57 | FromGitter | <Varriount> Which is why C2Nim helps. It needs some love though. |
18:42:24 | dom96 | Would c2nim help in this situation? |
18:43:23 | FromGitter | <Varriount> It can detect calling conventions for regular function prototypes. |
18:43:29 | * | Nobabs227 joined #nim |
18:43:38 | FromGitter | <Varriount> I don't know about function typedefs though. |
18:43:45 | ldlework | Anyone bored and wanna do some graphics programming?! |
18:46:12 | * | Nobabs25 quit (Ping timeout: 258 seconds) |
18:51:06 | * | libman quit (Quit: Connection closed for inactivity) |
18:53:00 | * | xmonader2 is now known as xmonader |
18:59:32 | * | gokr joined #nim |
19:05:07 | * | gokr quit (Ping timeout: 240 seconds) |
19:09:28 | * | Nobabs25 joined #nim |
19:11:50 | * | Nobabs227 quit (Ping timeout: 245 seconds) |
19:12:09 | * | fastrom joined #nim |
19:13:47 | fvs | generics question, say I have a function proc ln10[T](x:T):T = ln(10.0), how can I make the 10.0 a float32 if x is float32? - I presume 10.0 is float64, right? |
19:14:11 | FromGitter | <mratsim> 10.T |
19:14:32 | * | Nobabs227 joined #nim |
19:14:56 | fvs | 10.0 is T? |
19:15:01 | FromGitter | <mratsim> no |
19:15:09 | FromGitter | <mratsim> instead of 10.0 use 10.T |
19:15:26 | FromGitter | <mratsim> if T is int it will do 10.int, if T is float64 it will do 10.float64 |
19:15:38 | fvs | thanks! |
19:16:18 | FromGitter | <mratsim> you should use let ln10: T = ln(10.T) if it’s inside a proc |
19:16:54 | * | Nobabs25 quit (Ping timeout: 240 seconds) |
19:19:44 | * | chemist69 quit (Ping timeout: 255 seconds) |
19:22:22 | * | chemist69 joined #nim |
19:34:56 | FromGitter | <Varriount> You can also do T(10.0) |
19:35:16 | FromGitter | <Varriount> I prefer that style. |
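Editor's note: a small sketch of the two equivalent spellings (`10.T` and `T(10.0)`) applied to fvs's ln10 example; the float32/float64 constraint is written out explicitly so the snippet compiles on its own.

```nim
import math

proc ln10[T: float32 | float64](x: T): T =
  ln(T(10.0))            # or 10.T; either converts the literal to x's float type

echo ln10(2.0'f32)       # float32 result
echo ln10(2.0)           # float64 result
```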
19:35:27 | * | fastrom quit (Quit: Leaving.) |
19:37:29 | * | Nobabs25 joined #nim |
19:40:02 | * | Nobabs227 quit (Ping timeout: 240 seconds) |
19:41:28 | * | Nobabs227 joined #nim |
19:44:28 | * | Nobabs25 quit (Ping timeout: 260 seconds) |
19:50:30 | FromGitter | <mratsim> I stumbled upon that kind of conversion completely randomly, I don’t remember seeing it in the docs. Is it mentioned somewhere? |
20:00:53 | fvs | I like the 0.T but T(0.5) is required as 0.5T fails I guess. |
20:04:58 | fvs | got me thinking - is it at all possible to define a generic const, say const ι = [0.T, 1.T] |
20:07:28 | * | Nobabs25 joined #nim |
20:07:32 | * | nsf joined #nim |
20:08:24 | * | fastrom joined #nim |
20:08:42 | * | Tiberium quit (Remote host closed the connection) |
20:09:56 | * | Nobabs227 quit (Ping timeout: 252 seconds) |
20:10:32 | * | Nobabs227 joined #nim |
20:13:33 | * | Nobabs25 quit (Ping timeout: 260 seconds) |
20:14:30 | * | Nobabs25 joined #nim |
20:16:53 | * | Nobabs227 quit (Ping timeout: 255 seconds) |
20:17:32 | * | Nobabs227 joined #nim |
20:20:10 | * | zachcarter joined #nim |
20:20:23 | * | Nobabs25 quit (Ping timeout: 252 seconds) |
20:22:30 | * | Nobabs25 joined #nim |
20:24:59 | * | Nobabs227 quit (Ping timeout: 255 seconds) |
20:27:02 | * | fastrom quit (Quit: Leaving.) |
20:35:01 | * | Trustable joined #nim |
20:38:32 | * | Nobabs227 joined #nim |
20:41:12 | * | Nobabs25 quit (Ping timeout: 258 seconds) |
20:49:31 | * | Nobabs25 joined #nim |
20:50:13 | * | nsf quit (Quit: WeeChat 1.7) |
20:52:17 | * | Nobabs227 quit (Ping timeout: 252 seconds) |
20:58:28 | * | Trustable quit (Remote host closed the connection) |
21:00:28 | * | Nobabs227 joined #nim |
21:03:03 | * | Nobabs25 quit (Ping timeout: 258 seconds) |
21:04:31 | * | Nobabs25 joined #nim |
21:07:18 | * | Nobabs227 quit (Ping timeout: 260 seconds) |
21:07:26 | FromGitter | <Varriount> Benchmarks are hard: https://forum.nim-lang.org/t/2917 |
21:08:30 | * | fvs left #nim ("leaving") |
21:10:01 | * | Nobabs227 joined #nim |
21:12:41 | * | Nobabs25 quit (Ping timeout: 255 seconds) |
21:14:56 | * | Nobabs25 joined #nim |
21:17:29 | * | Nobabs227 quit (Ping timeout: 260 seconds) |
21:17:30 | * | bjz joined #nim |
21:18:27 | FromGitter | <Varriount> @mratsim https://nim-lang.org/docs/manual.html#statements-and-expressions-type-conversions |
21:18:59 | FromGitter | <Varriount> The manual knows all! (That araq puts in it) |
21:24:07 | * | krux02 joined #nim |
21:25:21 | krux02 | good evening |
21:26:23 | ldlework | hi krux02 |
21:26:27 | ldlework | how are ya |
21:26:38 | * | rauss quit (Quit: WeeChat 1.7) |
21:26:46 | krux02 | hi, I'm fine |
21:26:54 | ldlework | good to hear |
21:27:39 | krux02 | did you find out the problem why the OpenGl context could not be created? |
21:28:33 | ldlework | No |
21:28:55 | ldlework | krux02: I've been trying to convince federico3 to bind sdl-gpu |
21:29:25 | ldlework | Which looks very simple. A simple sprite API that does ordered batchin behind the scenes (like sfml) and also a simple shader API |
21:29:40 | krux02 | I really tried to be minimalistic in what I used for that library and now it turns out that this did not give me very wide cross platform support |
21:30:15 | ldlework | krux02: I can understand your frustration |
21:30:25 | ldlework | krux02: it may just be a situation with optimus though |
21:30:41 | ldlework | however, I do not have trouble with other opengl applications |
21:30:57 | ldlework | for example, pydsigner and I both experience broken vsync |
21:31:03 | ldlework | even when vsync is on, we experience tearing |
21:31:07 | pydsigner | Hiya krux02 |
21:31:16 | ldlework | so we probably just have a shit stack that you shouldn't worry too much about |
21:31:22 | krux02 | yea, but even if you don't use the nvidia driver at all, you should be able to run the demos, because the Intel drivers can do everything necessary |
21:31:25 | krux02 | I looked that up |
21:31:34 | ldlework | Sure but how to force their use? |
21:31:41 | ldlework | Its confusing topic for me. |
21:32:11 | krux02 | well when I used that I had to start programs with optirun |
21:32:35 | krux02 | was a bit problematic in some cases when the program I tried to run had its own startup script |
21:32:35 | ldlework | pydsigner: were you able to get any of his demos working even with optirun? |
21:32:46 | ldlework | you said you just got a black screen? |
21:32:54 | pydsigner | Yeah |
21:33:01 | pydsigner | That was without Nvidia |
21:33:18 | krux02 | by the way I have intel on this computer without Nvidia |
21:33:24 | krux02 | and I use software rendering |
21:33:42 | ldlework | I mean taking on the complexity and only getting software rendering... |
21:33:50 | krux02 | you should not need software rendering, but my demos are software rendering compatible even performance wise |
21:33:54 | ldlework | krux02: have you seen sdl-gpu |
21:33:55 | krux02 | everything is super efficient |
21:33:59 | krux02 | nope |
21:34:14 | ldlework | https://github.com/grimfang4/sdl-gpu |
21:36:03 | * | Nobabs227 joined #nim |
21:36:35 | ldlework | I just built it and ran its demos |
21:36:37 | ldlework | it works |
21:36:38 | * | gokr joined #nim |
21:36:41 | krux02 | ok I got that googled |
21:37:33 | ldlework | federico3: ^ |
21:37:36 | krux02 | I did not look too much into details, but as far as I understand it, it tries to provide an immediate mode API, but under the hood it does some sprite gathering and then does bulk operations |
21:37:45 | ldlework | yes |
21:37:48 | ldlework | just like sfml |
21:37:54 | ldlework | basically what I hoped you'd build for me |
21:37:58 | krux02 | yea I did not use sfml yet |
21:38:03 | ldlework | so I didn't have to deal with opengl myself |
21:38:25 | ldlework | So now we just need a nim binding :) |
21:38:44 | krux02 | I just read things about it and then didn't see a particular reason why it should be better than just sdl2 |
21:38:48 | * | Nobabs25 quit (Ping timeout: 268 seconds) |
21:38:54 | ldlework | SDL2 does no optimization |
21:39:06 | ldlework | it uses opengl textures but does not use them efficiently |
21:39:17 | ldlework | you'd have to use raw opengl to build that yourself and not use the SDL2 surfaces |
21:39:35 | ldlework | So its faster than sdl1, but no where close to sfml |
21:39:35 | krux02 | well it is hard to say what usage is efficient and what usage is not efficient |
21:39:43 | ldlework | eh? |
21:39:59 | ldlework | I'm saying SDL2 does no logic on the backend. It simply uses opengl backed textures. |
21:40:08 | krux02 | for gpu sdl or sfml you still need to use the immediate mode correctly in order for it to be able to optimize it |
21:40:29 | krux02 | I think you should use it, if it provides the path of least resistance for you |
21:40:31 | ldlework | Not with sdl2, it doesn't support doing any optimization at all |
21:40:46 | zachcarter | ooo gpu talk |
21:41:10 | krux02 | I want to provide a solution that is the path of least resistance for a lot of people, but when it is clearly not the path of least resistance I definitely need to put some work into it. |
21:41:16 | zachcarter | btw I looked at binding sdl-gpu back when I knew very little about binding and gave up; I was going to write bindings for it for an article I was writing but never got around to it |
21:41:33 | ldlework | It looks very useful. |
21:41:47 | krux02 | well you can write a binding for it |
21:41:48 | ldlework | Compared to say BGFX which can introduce a lot of complexity into your implementation. |
21:41:48 | FromGitter | <Varriount> zachcarter: Did you pick a song? |
21:42:02 | ldlework | krux02: are you so sure? :) |
21:42:03 | zachcarter | bgfx is useful |
21:42:14 | krux02 | as long as it has not too many macros and unbindable stuff, writing bindings is pretty straightforward |
21:42:14 | zachcarter | it does more than sdl gpu does, they’re two totally different libs |
21:42:27 | zachcarter | Varriount: not yet, I found a few I liked though |
21:42:28 | ldlework | I mean they both implement ordered batching no? |
21:42:32 | zachcarter | no |
21:42:37 | ldlework | Pretty sure... |
21:42:44 | zachcarter | they don't |
21:42:50 | zachcarter | you still have to batch draw calls yourself with bgfx |
21:42:53 | * | xmonader quit (Ping timeout: 252 seconds) |
21:42:57 | zachcarter | sdl gpu is higher level than bgfx |
21:43:13 | krux02 | by the way I prefer manual batching in an API |
21:43:14 | ldlework | "bgfx is using sort-based draw call bucketing. This means that submission order doesn’t necessarily match the rendering order, but on the low-level they will be sorted and ordered correctly." |
21:43:21 | krux02 | it is more honest about what is actually going on |
21:43:23 | zachcarter | yes that’s different from what sdl gpu does |
21:43:29 | zachcarter | that’s how the internal renderer works |
21:43:40 | zachcarter | it doesn’t mean it does draw call batching like sdl2 gpu does |
21:43:44 | ldlework | OK you are saying "they don't do that" |
21:44:01 | krux02 | I looked at bgfx, too |
21:44:07 | zachcarter | that’s right bgfx doesn’t batch your draw calls |
21:44:16 | zachcarter | like sdl gpu does |
21:44:19 | krux02 | it looks very intelligently designed |
21:44:42 | krux02 | worth investing time |
21:44:55 | ldlework | "On the high level this allows more optimal way of submitting draw calls for all passes at one place, and on the low-level this allows better optimization of rendering order." |
21:45:01 | ldlework | What is batching, if not exactly this? |
21:45:50 | zachcarter | for instance |
21:46:13 | zachcarter | you are drawing 12 vertices |
21:46:22 | zachcarter | 2 sprites |
21:46:34 | zachcarter | 6 indices |
21:46:53 | ldlework | ? |
21:46:55 | zachcarter | sdl gpu would submit them in one draw call |
21:47:07 | zachcarter | bgfx would not without you writing that abstraction |
21:47:33 | zachcarter | sdl gpu is like a sprite batch |
21:47:43 | zachcarter | you say start drawing |
21:47:46 | zachcarter | make your draw calls |
21:47:48 | ldlework | What is it describing then in the above quotes? |
21:47:49 | zachcarter | say finish drawing |
21:48:03 | zachcarter | it’s describing the way the internal renderer works; it uses submission-bucket-based rendering |
21:48:22 | zachcarter | with bgfx you don’t worry about the order of your draw calls, you just submit them and the underlying renderer buckets and orders them |
21:48:31 | zachcarter | you use views for ordering things |
21:48:41 | ldlework | .... |
21:48:45 | zachcarter | it’s a much lower level concept than what sdl2 gpu describes |
21:48:54 | zachcarter | I can’t be any clearer in my explanation |
21:49:19 | zachcarter | because I don’t know enough about bucket based draw sorting to explain this to you |
21:49:28 | zachcarter | I can link you to articles though if you’re interested in reading |
21:49:32 | ldlework | I just read the one linked |
21:49:40 | ldlework | It sounds like you put draws into buckets manually |
21:49:56 | ldlework | whereas in sdl-gpu, it automatically organizes them based on sprite state |
21:50:11 | ldlework | Is that really drastically different enough to stage the distinction? |
21:50:17 | zachcarter | it’s very different |
21:50:26 | ldlework | Can you expound the distinction? |
21:50:28 | zachcarter | working with bgfx is akin to working with opengl |
21:50:32 | zachcarter | or direct x |
21:50:35 | ldlework | I don't need analogies lol |
21:50:38 | zachcarter | so there’s no batching out of the box |
21:50:49 | zachcarter | you’re still using index and vertex buffers |
21:50:59 | zachcarter | and if you want batching you have to implement it yourself |
21:51:12 | ldlework | But not really though |
21:51:15 | * | Vladar quit (Quit: Leaving) |
21:51:20 | ldlework | You simply have to utilize the batching bgfx provides? |
21:51:50 | zachcarter | I’ve stated I don’t know how many times bgfx doesn’t do this for you |
21:51:56 | ldlework | The bucketing, which then results in the same kind of single-call-for-multiple-artifacts that sdl-gpu is doing automatically? |
21:52:22 | ldlework | You haven't clarified about what I'm asking though. |
21:52:35 | ldlework | BGFX appears to offer you some kind of functionality allowing you to bucket your drawing right? |
21:52:45 | ldlework | It comes with that, you don't implement the bucketing that it advertises that it does do you? |
21:52:57 | ldlework | So the distinction is that you have to actively engage this mechanism, you don't have to built it from scratch. |
21:53:01 | * | Nobabs25 joined #nim |
21:53:21 | zachcarter | draw call bucketing is different from sprite batching |
21:53:47 | zachcarter | http://realtimecollisiondetection.net/blog/?p=86 |
21:53:49 | ldlework | Sprite batching is surely utilizing the same exact optimization no? |
21:54:05 | zachcarter | no |
21:54:10 | ldlework | That is, sorting artifacts into buckets as to unify their draw calls |
21:54:22 | ldlework | ...OK then I have no idea what sprite batching is then :( |
21:54:46 | zachcarter | every sprite is 6 vertices |
21:54:51 | zachcarter | forming 2 triangles |
21:54:59 | zachcarter | if you use indices 3 verts |
21:55:00 | zachcarter | 6 indices |
21:55:08 | zachcarter | if you batch your draw calls |
21:55:11 | ldlework | What about the texture? |
21:55:21 | zachcarter | sure you can have tex coords and colors and all that too |
21:55:32 | * | Nobabs227 quit (Ping timeout: 240 seconds) |
21:55:38 | ldlework | OK so you're saying since all sprites are exactly the same format, they can fit into a single bucket |
21:55:38 | zachcarter | but the point is if you’re batching your draw calls |
21:55:52 | zachcarter | or into a buffer as OpenGL refers to them as |
21:55:58 | ldlework | So |
21:56:04 | ldlework | If I were using BGFX |
21:56:11 | ldlework | I could use the bucketing that BGFX provides |
21:56:19 | ldlework | to put all my sprite draws into a single draw call |
21:56:21 | ldlework | yes or no? |
21:56:23 | zachcarter | no |
21:56:44 | ldlework | because? |
21:56:46 | zachcarter | bgfx provides an api abstraction over opengl, direct x etc |
21:56:51 | zachcarter | because it works the same way as if you were using opengl |
21:57:11 | ldlework | so when the readme says those above quotes |
21:57:12 | zachcarter | the bucketing is how he is ordering around draw call data |
21:57:22 | zachcarter | if you look at the bgfx api |
21:57:23 | ldlework | that have absolutely no significance or relation to this coversation? |
21:57:36 | ldlework | you cannot actually interact with this bucketing feature? |
21:57:40 | zachcarter | no |
21:57:46 | zachcarter | it’s internal to the renderer |
21:58:01 | ldlework | So what exactly does the renderer sort and bucket? |
21:58:04 | zachcarter | you make draw calls the same way you make draw calls in direct x or open gl or whatever |
21:58:06 | zachcarter | draw call data |
21:58:07 | ldlework | If you implement your own sprite batching |
21:58:19 | ldlework | And you use BGFX's renderer API to implement that sprite batching |
21:58:23 | zachcarter | yes |
21:58:33 | ldlework | what does BGFX's renderer do internally that reflects the advertisements above? |
21:58:39 | ldlework | in addition to whatever you've already done? |
21:58:42 | zachcarter | you’d have to dig into the renderer code |
21:58:45 | ldlework | lol |
21:58:53 | ldlework | So its possible you don't know? |
21:59:09 | zachcarter | I do know because I’ve implemented sprite batching with bgfx |
21:59:13 | zachcarter | I don’t know how OpenGL interacts with the hardware |
21:59:21 | zachcarter | not on a line by line basis |
21:59:21 | ldlework | Sure but what is BGFX doing on your behalf? |
21:59:24 | zachcarter | I understand the pipeline |
21:59:58 | zachcarter | bgfx is providing a single API that will call whatever underlying graphics API makes sense per the device / platform you’re running on |
22:00:01 | ldlework | From what you've said, BGFX is both a unifying pure-graphics API, but also a renderer, with internals that does draw call batching. |
22:00:06 | zachcarter | amongst other things |
22:00:16 | ldlework | I don't see how you can't see the mutual exclusivity |
22:00:18 | zachcarter | you keep saying it does draw call batching |
22:00:23 | zachcarter | I keep saying it doesn’t |
22:00:32 | zachcarter | I say it does draw call sorting and bucketing |
22:00:38 | zachcarter | and I linked to an article explaining that concept |
22:00:48 | ldlework | I mean |
22:01:01 | * | Nobabs227 joined #nim |
22:01:01 | ldlework | by putting artifacts into buckets |
22:01:08 | ldlework | and running single draw calls on them |
22:01:11 | ldlework | that's batching dude |
22:01:32 | zachcarter | fine they do the same thing |
22:01:32 | ldlework | something is putting those things into those buckets, and saving draw calls because of it |
22:01:38 | zachcarter | you’ve proven it |
22:01:45 | ldlework | If you are doing "sprite batching" yourself, using this "pure graphics api" |
22:01:51 | ldlework | IE, saving your own draw calls |
22:01:55 | ldlework | Then what is the "renderer" doing? |
22:02:11 | ldlework | I'm just exploring the logic of what you're saying so I can understand. It seems contradictory. I'm not trying to win. |
22:02:17 | ldlework | I don't understand your description. |
22:03:10 | zachcarter | Please read that article I linked |
22:03:24 | ldlework | I read it |
22:03:41 | * | Nobabs25 quit (Ping timeout: 260 seconds) |
22:03:46 | zachcarter | okay so that is quite different from what I’m describing |
22:04:10 | * | Nobabs27 joined #nim |
22:04:26 | zachcarter | this is after your draw calls are already submitted to the renderer that this takes place |
22:04:54 | zachcarter | that what the article describes happens |
22:04:59 | zachcarter | you have no control over that |
22:05:01 | ldlework | OK I think I get that now |
22:05:11 | zachcarter | so that’s what bgfx does internally |
22:05:21 | zachcarter | sdl gpu would be like a layer sitting on top of bgfx |
22:05:23 | ldlework | It is literally keying low-level draw calls based on their parameters, then sorting them. |
22:05:27 | zachcarter | right |
22:05:57 | zachcarter | so like all these draw calls share the same material |
22:05:59 | * | Nobabs227 quit (Ping timeout: 252 seconds) |
22:05:59 | zachcarter | bucket them |
22:06:04 | ldlework | It doesn't know how the draw calls came to be |
22:06:12 | zachcarter | exactly |
22:06:16 | ldlework | so you could combine approaches, which is what you were saying above |
22:06:43 | zachcarter | yeah the kind of batching that sdl gpu does is essentially batching draw calls before they go to the renderer |
22:06:53 | zachcarter | so that any calls that share the same texture |
22:06:57 | ldlework | zachcarter: and to further prove my clarity, merely bucketing them is not "merge them into a single draw call" |
22:07:00 | zachcarter | are submitted in a big batch |
22:07:08 | zachcarter | right |
22:07:10 | ldlework | it's simply for ordering because that's somehow more efficient for the cpu itself |
22:07:13 | zachcarter | it’s not batching them iit’s ordering them |
22:07:14 | zachcarter | exactly |
22:07:22 | ldlework | I'm glad you didn't storm off :) |
22:07:49 | ldlework | I think my confusion makes sense if you apply that language a level up and conflate it with batching as I did. Thanks for sticking it out. |
22:07:50 | zachcarter | no haha, sorry I’m not the best at clarifying my thoughts at times |
22:08:11 | zachcarter | sure thing, it’s not the simplest topic either to discuss over the interwebz |
22:08:27 | ldlework | But now I'm more certain that I want sdl-gpu |
22:08:35 | zachcarter | I think sdl gpu is perfect for your project |
22:08:37 | ldlework | Though BGFX is the path to javascript frontend right? |
22:08:39 | zachcarter | and a lot of people’s projects |
22:08:50 | ldlework | Or is it |
22:08:52 | zachcarter | bgfx would be the path to getting things on as many platforms as possible |
22:09:05 | zachcarter | things that don’t support opengl |
22:09:12 | zachcarter | also for vulkan / metal support going forward |
22:09:33 | zachcarter | but like you said you pay in complexity |
22:09:46 | ldlework | krux02: so nim-gpu ontop of bgfx, whatdya say? |
22:09:51 | ldlework | XD |
22:10:00 | zachcarter | that would be nice |
22:10:02 | * | Nobabs25 joined #nim |
22:10:02 | zachcarter | I’d use it |
22:10:10 | ldlework | zachcarter: I know I've said it before so I'll stop saying it since I know it's annoying |
22:10:17 | ldlework | But you should break up frag a bit maybe.. |
22:10:39 | zachcarter | oh take out the renderer? |
22:10:53 | ldlework | Yeah you have multiple useful parts that could be useful on their own I think |
22:11:07 | zachcarter | hrm I might be able to |
22:11:19 | zachcarter | I’m very close to finishing the first demo with it |
22:11:33 | zachcarter | the one thing I’m scared of having happen, is it turning into Piston xD |
22:11:33 | * | nsf joined #nim |
22:11:42 | ldlework | I think I'd be more open to approaching Frag to work on components rather than having to take on the whole framework. |
22:12:00 | zachcarter | point well taken |
22:12:15 | zachcarter | I know it’s not the simplest of projects to just like clone and get going with |
22:12:19 | ldlework | Or utilizing them to stand up my own libraries, etc (therefore encouraging me to contribute) |
22:12:26 | zachcarter | right |
22:12:32 | ldlework | zachcarter: yeah and my goal is to use nim to produce the most aesthetic api possible |
22:12:35 | * | Nobabs27 quit (Ping timeout: 252 seconds) |
22:12:47 | ldlework | to explore that space, rather than make an engine someone might use to make money some day :) |
22:13:18 | FromGitter | <Varriount> Physics is generally separate from a graphics/game framework |
22:13:21 | ldlework | (not that they are necessarily exclusive but obvious difficulty there) |
22:13:32 | zachcarter | sure |
22:13:35 | FromGitter | <Varriount> Sounds+Graphics+Input handling is usually sufficient. |
22:13:50 | ldlework | Varriount, I think physics is basically decoupled in frag, there's just some integration which makes sense. |
22:13:59 | ldlework | Its nice when your sprites are easy to get bouncing off each other |
22:14:08 | zachcarter | aye physics is an optional module |
22:14:10 | zachcarter | spine is optional |
22:14:11 | ldlework | If you have to write that integration between frag's sprites and Chipmunk then the value goes down |
22:14:58 | ldlework | but I could see things like frag-bgfx frag-gpu frag-ecs frag-audio, etc |
22:15:21 | ldlework | Nim needs Interfaces... |
22:18:02 | * | Nobabs227 joined #nim |
22:18:25 | krux02 | ldlework: what do you mean with interfaces? |
22:18:54 | krux02 | If you mean go like interfaces, I wrote a macro for that, it's on the forums |
22:19:10 | krux02 | I think andrea once put it on github |
22:19:26 | ldlework | I saw someone's interface and it causes a stack overflow when testing against things that don't satisfy the interface |
22:19:30 | * | bjz quit (Quit: My MacBook Pro has gone to sleep. ZZZzzz…) |
22:19:40 | ldlework | It was the one andrea put on github |
22:20:48 | * | Nobabs25 quit (Ping timeout: 260 seconds) |
22:21:00 | * | Nobabs25 joined #nim |
22:23:04 | FromGitter | <Varriount> c-c-c-concepts! |
22:23:33 | * | Nobabs227 quit (Ping timeout: 240 seconds) |
22:25:00 | ldlework | Varriount, yeah but its just a fantasy |
22:25:32 | ldlework | That is for concepts to grow vtable support or whatever it is |
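Editor's note: for reference, a bare-bones Nim concept as mentioned above — purely compile-time matching, no vtable dispatch. The `Drawable`/`Circle` names are made up for illustration.

```nim
type
  Drawable = concept x
    x.draw()               # any type providing a matching draw proc satisfies this

  Circle = object
    radius: float

proc draw(c: Circle) = echo "circle of radius ", c.radius

proc render(d: Drawable) = d.draw()   # resolved at compile time, no vtable

render(Circle(radius: 1.0))
```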
22:26:28 | ldlework | I have been working in a serious golang codebase the last few weeks |
22:26:33 | ldlework | Its been pretty shocking |
22:27:02 | ldlework | I figured that, with interfaces being basically its only good feature, I would find tons of things implementing against interfaces, but it doesn't seem that way. |
22:32:09 | * | xet7 quit (Quit: Leaving) |
22:35:56 | * | nsf quit (Quit: WeeChat 1.7) |
22:37:29 | * | Nobabs227 joined #nim |
22:40:02 | * | Nobabs25 quit (Ping timeout: 240 seconds) |
22:46:40 | * | vendethiel quit (Quit: q+) |
22:53:21 | ldlework | Varriount, do you know if there is a way to reflect the arguments or arbitrary procs? |
22:53:58 | FromGitter | <Varriount> What kind of reflection? |
22:54:17 | ldlework | any kind? |
22:54:21 | FromGitter | <Varriount> Can you give me pseudocode of what you want to do? |
22:54:41 | ldlework | I want to look up a function by a string, and then get a list of its parameters including type info |
22:54:54 | ldlework | Varriount, I'm wondering if an IoC container is even possible in Nim |
22:55:04 | FromGitter | <Varriount> IoC? |
22:55:16 | ldlework | IoC container |
22:55:21 | ldlework | err inversion of control |
22:56:20 | ldlework | Varriount, if you can tell an IoC container how to produce instances of the dependencies of a client, it can then create instances of that client.
22:56:24 | FromGitter | <Varriount> ldlework: Nim has getImpl, but I believe that only works for types. A single symbol can have multiple procedures. |
22:56:38 | ldlework | Oh right interesting... |
22:56:51 | ldlework | Well that's OK though |
22:57:20 | FromGitter | <Varriount> The general way to go about this is to have the user mark the procedures/types they want, and generate a structure from that. |
22:57:29 | ldlework | That's not the general way to go about it at all. |
22:57:39 | ldlework | Are you familiar with the implementations of IoC containers? |
22:58:03 | FromGitter | <Varriount> It's the general way of generating encompassing data structures from macros. |
22:58:19 | FromGitter | <Varriount> Not something IoC-related. |
22:58:27 | ldlework | Sure, maybe in Nim |
22:58:28 | * | Nobabs25 joined #nim |
22:58:38 | ldlework | In other languages no such explicit demarcation is required |
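(To make the mark-and-generate pattern Varriount describes above concrete, here is a hedged sketch: the user lists the procs explicitly and a macro builds a string-keyed table from them. `registry`, `greet`, `farewell`, and `byName` are hypothetical names; this is an illustration of the macro approach, not an IoC container.)

```nim
import macros, tables

# The user lists the procs explicitly; the macro expands to a
# {"name": procSym, ...} table constructor.
macro registry(procs: varargs[typed]): untyped =
  result = newNimNode(nnkTableConstr)
  for p in procs:
    result.add newTree(nnkExprColonExpr, newLit($p), p)

proc greet(): string = "hello"
proc farewell(): string = "bye"

# Build a runtime table so procs can be looked up by string.
let byName = toTable(registry(greet, farewell))
echo byName["greet"]()   # prints "hello"
```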
22:59:54 | FromGitter | <Varriount> In Java, don't you generally need to inherit or extend a base interface? |
22:59:57 | ldlework | With IoC containers, auto-wiring is the most popular modern approach, combining reflection with convention over configuration
23:00:48 | ldlework | I know nothing about Java; it is not the language I look at when trying to figure out where the ideals might be.
23:01:01 | ldlework | I would choose the state of the art in C# over Java every time
23:01:08 | * | Nobabs227 quit (Ping timeout: 255 seconds) |
23:01:49 | ldlework | But I guess I'm hearing "No" which is kinda what I expected. |
23:05:53 | FromGitter | <Varriount> ldlework: I know what the general technique of inversion of control is. I don't know what you mean by an "IoC" container, since "IoC" is a general concept.
23:06:29 | * | Nobabs227 joined #nim |
23:08:07 | ldlework | Sure, but IoC containers are not. But confusion over the terminology is part and parcel.
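(For readers who haven't met the term: a rough, hedged sketch of what an IoC container does, written here with explicit factories instead of reflection, since that is what plain Nim readily supports. All names — `Database`, `UserService`, `Factory`, `factories` — are hypothetical.)

```nim
import tables

type
  Database = ref object of RootObj
    url: string
  UserService = ref object of RootObj
    db: Database

  Factory = proc (): RootRef    # bare proc type defaults to closure, so lambdas fit

var factories = initTable[string, Factory]()

# Tell the "container" how to build the dependency...
factories["Database"] = proc (): RootRef =
  Database(url: "postgres://localhost/demo")

# ...and it can then build the client that needs that dependency.
factories["UserService"] = proc (): RootRef =
  UserService(db: Database(factories["Database"]()))

let svc = UserService(factories["UserService"]())
echo svc.db.url
```

The manual wiring inside each factory is exactly the part that auto-wiring containers in other languages derive from reflection.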
23:09:14 | * | Nobabs25 quit (Ping timeout: 255 seconds) |
23:09:26 | * | Nobabs27 joined #nim |
23:11:03 | FromGitter | <Varriount> Nim has generics, concepts, methods, and references. It has macros that can automate the construction of types. It also has runtime type reflection (https://nim-lang.org/docs/typeinfo.html) |
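(A tiny, hedged illustration of the std typeinfo module linked above: it reflects over values at runtime, not over procs, so by itself it does not give "find a proc by name". `Player` is a hypothetical type.)

```nim
import typeinfo

type Player = object
  name: string
  score: int

var p = Player(name: "ada", score: 3)
var a = toAny(p)                        # wrap the value for runtime inspection
for fieldName, fieldVal in fields(a):
  echo fieldName, ": ", fieldVal.kind   # e.g. name: akString, score: akInt
```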
23:11:29 | * | Nobabs227 quit (Ping timeout: 255 seconds) |
23:11:53 | FromGitter | <Varriount> I honestly don't understand what you want, because I'm not familiar enough with the terminology you're using. |
23:12:36 | FromGitter | <Varriount> An example scenario or situation would help. |
23:12:58 | ldlework | I think you're being disingenuous |
23:13:11 | ldlework | What part of "find a function by its name, and know its parameters" is strange terminology? |
23:13:42 | ldlework | Just because you place knowing the whole domain of my problem in front of answering that simple question does not make my language use opaque.
23:15:04 | FromGitter | <Varriount> Use macros.getImpl |
23:16:41 | ldlework | Varriount, is there a way to get a list of all defined procs though? |
23:17:28 | ldlework | What does macros.getImpl do in the face of proc overloads? I suppose I can try that one out myself.
23:17:44 | FromGitter | <Varriount> I don't know. I can't test out anything at the moment. |
23:18:06 | FromGitter | <Varriount> ```retrieve the implementation of a symbol s. s can be a routine or a const.```
23:18:41 | ldlework | Yeah I read that. Does it answer the question of what happens in the face of an overload? |
23:18:47 | FromGitter | <Varriount> No. |
23:18:58 | ldlework | Why is everyone hasslin me today? lol |
23:20:34 | FromGitter | <Varriount> ldlework: To be blunt: Because I'm trying to work, you're asking questions that I can't confidently answer, and I don't want you to blame Nim because a strategy that works with C# and Java can't be used the exact same way in Nim. |
23:21:03 | FromGitter | <Varriount> A good question to ask is "How would this be done in C++ or C?" |
23:21:52 | ldlework | My question is specifically if Nim supports that strategy though. |
23:23:13 | ldlework | Since Nim has some amount of reflection, it seems like a perfectly rational inquiry to know the extents of Nim's reflection facilities. |
23:23:23 | FromGitter | <Varriount> Yes. But it seems to me that you're trying to use an implementation method for that strategy which works best with C# |
23:23:35 | ldlework | Trying to guess what I really care about is what leads to these estrangements, not the fact that you're busy.
23:24:04 | ldlework | I'm literally trying to see if Nim supports that methodology. It could be the case that it does.
23:24:07 | ldlework | Should I have just said |
23:24:24 | ldlework | Well, I learned this technique from C#, so obviously it is only ideal for C# and I shouldn't even try to apply these concepts anywhere else. |
23:24:37 | ldlework | I mean what the hell are you getting at. I asked a specific question and you're playing therapist. |
23:24:41 | ldlework | I thought you had no time. |
23:27:10 | * | peted quit (Ping timeout: 240 seconds) |
23:27:11 | FromGitter | <Varriount> Sorry |
23:27:34 | ldlework | If we just happened to live in a world where Nim had stronger reflection capabilities, you would have just gone "Oh yeah, the thing to get all defined names is here, and here's how you look up an implementation by name." And I would've gone "Oh cool."
23:27:35 | FromGitter | <Varriount> I should have just answered the question. |
23:27:45 | ldlework | Just because it doesn't happen to have them doesn't mean I asked the wrong question, or in the wrong way. |
23:27:52 | ldlework | Ya, you did, which I appreciate.
23:28:54 | FromGitter | <gogolxdong> @TiberiumPY I am glad to do so. |
23:29:54 | FromGitter | <Varriount> ldlework: So, does getImpl work on procedures? |
23:29:56 | ldlework | Varriount, given that it doesn't have the facilities to do it that way, I can still appreciate the other stuff you said too.
23:30:05 | ldlework | Ah, let's find out
23:34:28 | ldlework | eh I don't know how to use it properly, https://glot.io/snippets/ep1c77l88z |
23:40:00 | * | Nobabs25 joined #nim |
23:41:01 | FromGitter | <Varriount> ldlework: I'll look at it later |
23:42:08 | * | osterfisch is now known as qwertfisch |
23:42:32 | * | Nobabs27 quit (Ping timeout: 240 seconds) |
23:42:49 | FromGitter | <Varriount> @araq How would I go about using getImpl to get the parameters of a function? |
23:48:46 | * | peted joined #nim |
23:51:08 | FromGitter | <stisa> ldlework: with a single `foo` https://glot.io/snippets/ep1cl57qk4, not sure if it can work with overloaded procs
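(A hedged sketch along the lines stisa describes — a single, non-overloaded proc. This is not the contents of the linked snippet; `area` and `showParams` are hypothetical names.)

```nim
import macros

proc area(width, height: float): float = width * height

macro showParams(p: typed): untyped =
  let fp = p.getImpl.params        # nnkFormalParams: [return type, IdentDefs...]
  echo "returns: ", fp[0].repr
  for i in 1 ..< fp.len:
    echo "params: ", fp[i].repr    # e.g. "width, height: float"
  result = newEmptyNode()

showParams(area)    # output appears at compile time
```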
23:52:03 | FromGitter | <Varriount> For overloads you need to resolve the symbol somehow |
23:52:28 | ldlework | main.nim(14, 6) Error: type mismatch: got (proc (bar: float): float{.noSideEffect, gcsafe, locks: 0.} | proc (bar: int): int{.noSideEffect, gcsafe, locks: 0.}) |
23:52:30 | ldlework | but expected one of: |
23:52:32 | ldlework | macro print(node: typed): untyped |
23:53:11 | ldlework | the macro we're writing should be doing the resolution |
23:53:38 | ldlework | in the domain of my problem, by looking at the parameters and seeing which ones it can satisfy. Also which one returns the right type, etc.
23:53:42 | FromGitter | <Varriount> Perhaps you can use output from https://nim-lang.org/docs/macros.html#bindSym,string,BindSymRule ? |
23:54:10 | FromGitter | <Varriount> "If rule == brClosed either an nkClosedSymChoice tree is returned or nkSym if the symbol is not ambiguous. " |
23:54:43 | ldlework | Interesting |
23:54:47 | FromGitter | <Varriount> Or in other words, if an nkClosedSymChoice tree is returned, the symbol is ambiguous |
23:55:19 | FromGitter | <Varriount> I'd look to see if an nkClosedSymChoice tree has the various pieces of data you need |
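(A hedged sketch of the bindSym suggestion above: resolve an overloaded name to a sym choice at compile time and inspect each candidate's parameters. `scale` and `listScaleOverloads` are hypothetical names.)

```nim
import macros

proc scale(x: int): int = x * 2
proc scale(x: float): float = x * 2.0

macro listScaleOverloads(): untyped =
  # brClosed yields an nnkClosedSymChoice when the name is overloaded,
  # or a plain nnkSym when it is unambiguous.
  let syms = bindSym("scale", brClosed)
  if syms.kind == nnkClosedSymChoice:
    for s in syms:
      echo s.getImpl.params.repr   # formal params of each overload
  else:
    echo syms.getImpl.params.repr
  result = newEmptyNode()

listScaleOverloads()
```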
23:56:58 | * | Nobabs227 joined #nim |
23:59:17 | * | Nobabs25 quit (Ping timeout: 252 seconds) |