<< 17-04-2017 >>

00:10:05*Guest42307 joined #nim
00:11:32FromGitter<Varriount> @dom96: I finally got around to fixing NimLime breaking multi-selection. :D
00:12:25FromGitter<Varriount> Now I just need to find a good input combo for suggestions/lookups
00:22:02*libman joined #nim
00:29:09*Navajo joined #nim
00:29:15*Navajo left #nim (#nim)
00:32:28FromGitter<Varriount> @araq Does the compiler do any static analysis to optimize heap memory allocations?
01:03:11*Guest42307 quit (Remote host closed the connection)
01:05:19*devted quit (Quit: Sleeping.)
01:26:11*zachcarter quit (Quit: zachcarter)
01:32:29FromGitter<Varriount> @zacharycarter You might find https://modarchive.org/index.php?request=view_by_license&query=publicdomain a good place to look for video game music
01:32:47FromGitter<Varriount> Especially for anything retro-sounding.
01:34:48*yglukhov joined #nim
01:39:29*yglukhov quit (Ping timeout: 252 seconds)
01:43:23*bjz_ quit (Quit: Textual IRC Client: www.textualapp.com)
01:45:57*bjz joined #nim
01:48:27*chemist69 quit (Disconnected by services)
01:48:33*chemist69_ joined #nim
02:02:10*bjz quit (Quit: My MacBook Pro has gone to sleep. ZZZzzz…)
02:36:00*dexterk_ quit (Quit: Konversation terminated!)
02:38:59*jordo2323 joined #nim
02:43:30*bjz joined #nim
02:43:36*Nobabs27 joined #nim
02:43:56*jordo2323 quit (Quit: leaving)
02:49:16*babs_ joined #nim
02:49:42*bjz quit (Quit: My MacBook Pro has gone to sleep. ZZZzzz…)
02:51:42*Nobabs27 quit (Ping timeout: 258 seconds)
03:10:50ftsfo/
03:14:44libmanhttps://www.reddit.com/r/nim/comments/65lm6h/any_ideas_on_how_to_improve_performance_further/
03:15:19*babs__ joined #nim
03:17:46*babs_ quit (Ping timeout: 258 seconds)
03:20:33*bjz joined #nim
03:23:07*bjz quit (Client Quit)
03:23:44*bjz joined #nim
03:37:18*yglukhov joined #nim
03:41:53*yglukhov quit (Ping timeout: 260 seconds)
03:52:54*dexterk quit (Quit: hAvE yOu mOOEd tOdAY)
03:52:54*bjz quit (Quit: My MacBook Pro has gone to sleep. ZZZzzz…)
03:58:39*bjz joined #nim
04:08:12*babs__ quit (Quit: Leaving)
04:08:30*babs__ joined #nim
04:12:38FromGitter<Varriount> libman: How many cores does your machine have?
04:12:57libman4
04:14:22FromGitter<Varriount> libman: Then that's why only 4 threads are spawned. `spawn` creates a threadpool
04:16:00libmanYeah, I think the program the reddit user u/SiD3W4y is asking about expects to be able to start an arbitrary number of threads.
04:16:31FromGitter<Varriount> https://nim-lang.org/docs/threadpool.html
04:16:49FromGitter<Varriount> Although, the threadpool is global for an entire program.
04:17:26FromGitter<Varriount> Part of me wonders if the API would have been better if the threadpool was an actual non-global object, but w/e
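A minimal sketch of the `spawn`/threadpool behaviour described above, assuming `--threads:on`; the `work` proc and the loop bound are purely illustrative:

```nim
import threadpool

proc work(n: int): int =
  # stand-in computation; spawned procs must be GC-safe
  n * n

when isMainModule:
  var results: seq[FlowVar[int]] = @[]
  for i in 0 ..< 8:
    # tasks are queued on the global threadpool, which defaults to one
    # thread per core -- hence "only 4 threads" on a 4-core machine
    results.add(spawn work(i))
  sync()           # wait for all outstanding spawns
  for fv in results:
    echo ^fv       # ^ blocks until the FlowVar has a value
```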
04:33:48*babs_ joined #nim
04:36:11*babs__ quit (Ping timeout: 240 seconds)
04:37:05*babs_ quit (Client Quit)
04:37:22*babs_ joined #nim
04:37:59*Snircle quit (Quit: Textual IRC Client: www.textualapp.com)
04:39:24*babs__ joined #nim
04:41:53*bjz quit (Quit: My MacBook Pro has gone to sleep. ZZZzzz…)
04:42:08*babs__ is now known as Nobabs27
04:42:39*babs_ quit (Ping timeout: 260 seconds)
04:44:02*Nobabs27 quit (Client Quit)
04:44:18*Nobabs27 joined #nim
05:02:12*bjz joined #nim
05:03:49*Nobabs25 joined #nim
05:06:15*Nobabs27 quit (Ping timeout: 258 seconds)
05:38:01*Nobabs25 quit (Quit: Leaving)
05:39:48*yglukhov joined #nim
05:44:12*yglukhov quit (Ping timeout: 258 seconds)
05:56:17*chemist69_ quit (Ping timeout: 260 seconds)
05:58:26*chemist69 joined #nim
06:07:57*Vladar joined #nim
06:25:39*xmonader3 joined #nim
06:36:09*rokups joined #nim
06:37:26*nsf joined #nim
06:41:02*Arrrr joined #nim
06:45:50*Arrrr quit (Ping timeout: 252 seconds)
07:16:42*yglukhov joined #nim
07:21:07*yglukhov quit (Ping timeout: 240 seconds)
07:29:54*Arrrr joined #nim
07:29:54*Arrrr quit (Changing host)
07:29:54*Arrrr joined #nim
07:31:06*libman quit (Quit: Connection closed for inactivity)
07:36:03*xmonader2 joined #nim
07:39:12*xmonader3 quit (Ping timeout: 258 seconds)
08:02:13*chemist69 quit (Ping timeout: 240 seconds)
08:07:14*chemist69 joined #nim
08:32:16FromGitter<mratsim> Design question: for my multidimensional array library I want to save memory by shallow copying by default and only deep copying when there is a modification. ⏎ How to best detect if a copy is needed ? Refcounting/copy-on-write ?
08:32:52ldleworkIs there a way to reflect and get the arguments of an arbitrary proc or method?
08:33:51ftsfhttps://gist.github.com/ftsf/133a6d6797d7bd748b9e5d00b9d2ab1b trying to make a simple object pool (free list), any idea why I can't cast a var T to ptr T in getNext ?
08:34:09ldleworkheh I was just about to sit down and write an object pool
08:41:18ArrrrTry with addr(item)
08:41:40ftsfhmm but I don't want the addr of item, i'm using item to store the addr
08:45:24ftsfi wish it told me _why_ it can't be cast =p
08:45:49FromGitter<mratsim> item should be a ByteAddress no?
08:46:47ftsfitem is of type T (generally an object), but is also used to store a pointer to another T (next free slot) when it's empty.
08:47:05FromGitter<mratsim> I use this snippet stolen from Jehan for pointer arithmetics if it helps: https://github.com/mratsim/Arraymancer/blob/master/src/utils/pointer_arithmetic.nim
08:48:18FromGitter<mratsim> for getNext you can just add sizeof(type p[])
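For context, a tiny sketch of what that kind of pointer arithmetic looks like; the `+!` helper is made up here, not necessarily the name used in the linked snippet:

```nim
# advance a ptr T by n elements, working through byte addresses
template `+!`[T](p: ptr T, n: int): ptr T =
  cast[ptr T](cast[ByteAddress](p) + n * sizeof(T))

var buf = [10, 20, 30, 40]
let first = addr buf[0]
echo (first +! 2)[]   # prints 30
```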
09:02:35*yglukhov joined #nim
09:06:50*yglukhov quit (Ping timeout: 255 seconds)
09:13:11*Arrrr quit (Quit: Leaving.)
09:19:50*shashlick quit (Ping timeout: 252 seconds)
09:20:19*shashlick joined #nim
09:22:58*yglukhov joined #nim
09:31:04*vivus joined #nim
09:39:56ftsf\
09:40:05ftsf\o/ got it working ,but it's uglier than i'd hoped =(
09:40:21FromGitter<mratsim> how?
09:40:57ftsfusing copyMem to setNext
09:42:10ftsfhttps://gist.github.com/ftsf/133a6d6797d7bd748b9e5d00b9d2ab1b
09:43:39*krux02 joined #nim
09:47:12FromGitter<mratsim> I think comments from others might be better, not familiar with object pools, I will probably hit the same issues as you on my lib though: how to iterate through non-contiguous arrays
09:50:28ftsfone thing that was odd (but understandable) was that when assigning to a target larger than the source, the target had its data changed (to nonzero values) in the portion beyond the source size
09:50:37ftsfhence having to use copymem
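A hedged sketch (not ftsf's actual gist) of the copyMem trick being described: the first bytes of an unused slot hold a pointer to the next free slot, assuming `T` is at least pointer-sized:

```nim
type
  Pool[T] = object
    slots: seq[T]
    freeList: ptr T        # head of the free list threaded through unused slots

proc setNext[T](slot: var T, next: ptr T) =
  # copy only sizeof(pointer) bytes so the rest of the slot stays untouched
  var n = next
  copyMem(addr slot, addr n, sizeof(ptr T))

proc getNext[T](slot: var T): ptr T =
  copyMem(addr result, addr slot, sizeof(ptr T))
```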
09:54:32*krux02 quit (Quit: Verlassend)
10:07:44*Sembei joined #nim
10:10:07FromGitter<gogolxdong> @dom96 where does the compiler define`#define NIM_INTBITS 32`
10:10:35ftsf /usr/include/nimbase.h ?
10:10:56ftsfnope sorry
10:10:58ftsfignore me
10:16:53*zachcarter joined #nim
10:17:18FromGitter<gogolxdong> there is no specific target platform but the compiled c file has that macro definition
10:18:23FromGitter<gogolxdong> is it possible to build nimkernel on amd64 with gcc and as or nasm
10:20:11dom96My guess would also be nimbase.h
10:20:23dom96I'm guessing nimkernel would need some modifications to support amd64
10:24:16FromGitter<gogolxdong> I looked into nimbase.h; NI depends on the definition of a C macro
10:28:39FromGitter<gogolxdong> which defines `#define NIM_INTBITS X`
10:30:27*nhywyll joined #nim
10:31:36*nhywyll quit (Client Quit)
10:32:16*rokups quit (Quit: Connection closed for inactivity)
10:36:09*Ven joined #nim
10:36:32*Ven is now known as Guest88951
10:37:25FromGitter<gogolxdong> running `nim c nakefile.nim` overwrites main.c in nimcache with `#define NIM_INTBITS 32`
10:44:11*bjz_ joined #nim
10:46:39*bjz quit (Ping timeout: 258 seconds)
10:55:11FromGitter<stisa> @gogolxdong It's defined in the c generation I think, here https://github.com/nim-lang/Nim/blob/7e351fc7fa96b4d560c5a51118bab22abb590585/compiler/cgen.nim#L855
10:58:59dom96In that case, you probably control it via the --cpu flag. For example: --cpu:amd64
11:01:39FromGitter<gogolxdong> I will try :)
11:02:49*Snircle joined #nim
11:12:51FromGitter<zacharycarter> @Varriount thanks for that link
11:15:21zachcarter@Varriount think I just found a new track for space invaders on there :P
11:25:35FromGitter<gogolxdong> doesn't work
11:27:02*Trustable joined #nim
11:33:27*Guest88951 quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
11:37:29dom96huh, apparently Amazon has used copies of my book already. That sure is impossible: https://www.amazon.co.uk/Nim-Action-Dominik-Picheta/dp/1617293431/ref=sr_1_1?ie=UTF8&qid=1479663850&sr=8-1&keywords=nim+in+action
11:38:34FromGitter<mratsim> 88 pounds, wow :O
11:39:03*alectic joined #nim
11:40:12dom96Either it's a scam or a mistake, in any case don't buy it :)
11:40:25dom96(From Amazon at least for now)
11:42:31alecticHi. I just registered yesterday on the forum but haven't received any confirmation mail at all. Is there anything to be done, like resend the mail or such (haven't seen the option anywhere)? I can't login without it being confirmed.
11:43:28dom96alectic: hey, maybe your email provider decided it was spam. I can activate the account for you, what's your nickname?
11:43:29FromGitter<mratsim> Seems like you were hit by my curse :P. “Cast @dom96"
11:44:15alecticdom96: I use gmail I don't think that's the issue. My nickname is alexdreptu
11:44:30dom96alectic: then perhaps nimforum is just failing :)
11:44:47dom96Activated
11:44:56alecticdom96: I wouldn't know but I thought to mention something on IRC
11:45:00alecticdom96: thank you
11:56:22FromGitter<Bennyelg> Anyone can help?
11:56:27FromGitter<Bennyelg> ```code paste, see link``` [https://gitter.im/nim-lang/Nim?at=58f4ad6e8fcce56b200d2fca]
11:56:37FromGitter<Bennyelg> What is wrong ? :(
12:01:18FromGitter<RSDuck> I think you have to do it this way: ⏎ ⏎ ```var c = new(cursor) ⏎ c.dbConnection = dbCon ⏎ return c.getCursor()``` [https://gitter.im/nim-lang/Nim?at=58f4ae908bb56c2d11b64b5a]
12:01:50FromGitter<Bennyelg> Great thanks.
12:02:31FromGitter<RSDuck> BTW if you don't plan to modify c you should make it a let variable
12:02:50FromGitter<Bennyelg> Yea I thought about it now :D
12:02:59*alectic quit (Quit: because gitter)
12:03:41FromGitter<RSDuck> for easier construction of these ref objects I usually create a proc like this:
12:04:34FromGitter<RSDuck> ```code paste, see link``` [https://gitter.im/nim-lang/Nim?at=58f4af55d32c6f2f09133c46]
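The linked paste isn't preserved in the log, but the constructor-proc pattern RSDuck describes is roughly this (field name taken from the snippet above; `DbConn` is a stand-in type):

```nim
type
  DbConn = object            # stand-in for the real connection type
  Cursor = ref object
    dbConnection: DbConn

proc newCursor(dbCon: DbConn): Cursor =
  # allocate the ref object and fill in its fields in one place
  new(result)
  result.dbConnection = dbCon

let c = newCursor(DbConn())
```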
12:06:43*peted quit (Quit: WeeChat 1.4)
12:07:00*peted joined #nim
12:07:12zachcarterinvaders are coming! http://imgur.com/a/FXTyh
12:13:02zachcarterdo they look too tightly packed? http://imgur.com/a/gV7Cq
12:13:30zachcarterI think so.. one laser shot as it stands could take out multiple invaders
12:14:39zachcarterhttp://imgur.com/a/P0vTR looks better
12:14:49FromGitter<gogolxdong> in any other case it is compiled with `#define NIM_INTBITS 64`
12:15:11FromGitter<mratsim> You have some margin before reaching touhou level https://theskb.files.wordpress.com/2015/12/touhou.gif
12:16:06FromGitter<RSDuck> you could also position them in other formations(like a circle or other geometric bodies)
12:20:18FromGitter<gogolxdong> I'm not sure what happened
12:48:37*zachcarter quit (Read error: Connection reset by peer)
12:48:59*zachcarter joined #nim
12:57:34*vivus quit (Quit: Leaving)
13:01:37FromGitter<Varriount> Zachcarter: Generally the vertical and horizontal spacing is the same.
13:02:31FromGitter<Varriount> https://screenshots.en.sftcdn.net/en/scrn/25000/25176/chicken-invaders-1.jpg
13:06:03FromGitter<Varriount> @mratsim Could you explain your shallow copying question again?
13:07:34FromGitter<mratsim> This is my data structure ⏎ ⏎ ```code paste, see link``` ⏎ ⏎ data is potentially huge, a 300 MB to a few GB seq [https://gitter.im/nim-lang/Nim?at=58f4be198e4b63533dd81c7f]
13:07:37federico3Being able to generate parser directly from ABNF would be really nice - http://marcelog.github.io/articles/abnf_grammars_in_elixir.html
13:08:41FromGitter<mratsim> It might be quite costly to copy this data structure when just referring to “data” is enough
13:09:21FromGitter<Varriount> Why is it an object? Make it a ref
13:10:51FromGitter<mratsim> mmmh, let me think
13:11:14FromGitter<Varriount> You aren't receiving any benefits from using an object type - sequences store their data in a separate memory chunk anyway
13:11:53FromGitter<mratsim> that’s why I used an object type, since it’s stored somewhere else anyway, why introduce another indirection layer
13:13:06FromGitter<Varriount> Because an object type is copied on assignment. Any members that are strings or sequences have their data copied too
13:14:48FromGitter<Varriount> Unless marked as shallow or wrapped in a reference type, strings and sequences can only ever be referenced in one location.
13:16:08FromGitter<mratsim> this is what I want for all the object fields, except for data. I’m still thinking over if it’s best to deep copy and expect the compiler/GC to free/reuse the memory when the original “Tensor” is not used
13:16:49*libman joined #nim
13:17:34FromGitter<mratsim> or if I should manage memory assignment/copy/etc manually in my library.
13:17:43FromGitter<Varriount> You *want* nearly everything in that object to be copied on assignment?
13:18:19FromGitter<Varriount> I mean, I guess users could still wrap it in a ref
13:18:33FromGitter<Varriount> On their own
13:19:32FromGitter<mratsim> yes. concrete example: for a 3 by 4 matrix, dimensions will hold @[4, 3], strides will hold [4, 1], offset will just be a pointer to data[0] and data will hold the actual data of the matrix
13:20:37FromGitter<mratsim> now if I transpose it. dimensions is @[3, 4], strides becomes @[1,4], data can stay the same. It’s just a different view of the same data
13:21:38FromGitter<Varriount> @mratsim If you keep it an object type, then the only way to prevent the data attribute from being copied is to either mark it as shallow (which means data can't be modified) or make it a `ref seq`
13:21:53FromGitter<mratsim> now I want to update the original matrix at coordinate 0, 0: ⏎ ⏎ 1) if it was deep copied for transposition. no worries ⏎ 2) if it was shallow copied. there comes the issues. [https://gitter.im/nim-lang/Nim?at=58f4c174a0e485624211452b]
13:23:33FromGitter<Varriount> @mratsim I think you might be a bit confused.
13:23:38FromGitter<mratsim> So I’m evaluating the difficulty of using ref/shallow copy to have the best memory usage vs deep copying and trusting the Nim GC/compiler
13:23:52FromGitter<mratsim> probably :/ spent too much time thinking on it
13:24:06FromGitter<Varriount> Deep copying only occurs in special cases, usually involving threading.
13:24:38FromGitter<Varriount> Deep copying means copying the entire object, creating copies of reference types, etc
13:25:00FromGitter<Varriount> The entire tree, references and all, is duplicated.
13:25:14*yglukhov quit (Remote host closed the connection)
13:25:25FromGitter<Varriount> This is in contrast to default assignment/copying semantics
13:26:13FromGitter<Varriount> For reference types, only the reference itself (the pointer) is usually copied. The data the reference points to is not usually copied.
13:27:02FromGitter<mratsim> But just to make sure, currently my object wraps sequences. So if it is copied, every seq is copied, deeply so right ?
13:27:11FromGitter<Varriount> Yes.
13:27:21FromGitter<Varriount> Not deeply
13:27:52FromGitter<Varriount> If the sequences contain references, only the references themselves, not the data they point to, will be copied.
13:28:26FromGitter<mratsim> ok.
13:29:44FromGitter<Varriount> Will 'data' be modified multiple times after the object has been created?
13:30:30FromGitter<mratsim> Some calculations like transpose can be done just by modifying dimensions and strides.
13:31:21FromGitter<mratsim> but data can be modified later on matrix A and transpose(A) independently
13:31:40FromGitter<Varriount> Then `data` will probably need to turn into a `ref seq`
13:32:21FromGitter<Varriount> If you don't want the sequence data to be copied along with the others.
13:32:51FromGitter<Varriount> Will the modifications change the length of `data`?
13:33:00FromGitter<mratsim> no they won't
13:33:31FromGitter<mratsim> but it can be that all elements in data are multiplied by 2, for example
13:34:03FromGitter<Varriount> Hm, then maybe you can get away with just marking it as shallow()
13:35:55FromGitter<Varriount> @mratsim https://forum.nim-lang.org/t/2665#16491
13:36:10FromGitter<mratsim> That’s what I was thinking, or just use {.shallow.} pragma.
13:38:38FromGitter<Varriount> You just need to make sure any sequence marked as shallow never has its size changed, otherwise weird things happen.
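A tiny sketch of the `shallow` behaviour being warned about, assuming the refc GC of that era (under ARC/ORC the call is effectively a no-op and assignments copy):

```nim
var a = @[1, 2, 3, 4]
shallow(a)     # subsequent assignments copy only the seq header, not the data
var b = a      # b now shares a's backing buffer
b[0] = 99
echo a[0]      # 99 -- both names see the same storage
# growing b here (e.g. b.add 5) could reallocate its storage so a and b
# silently diverge: the "weird things" to avoid
```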
13:39:24FromGitter<mratsim> and then I guess, every time a proc is modifying the underlying data, I have to create a new object if there are more than 1 ref to it
13:41:11FromGitter<Varriount> Huh?
13:42:18FromGitter<mratsim> if I have A and B, referring to the same underlying data. I want to do 2*B, but not change A.
13:43:11FromGitter<Varriount> Yes.
13:43:50FromGitter<Varriount> But again, why not do that with all the sequence members?
13:44:30FromGitter<Varriount> @zacharycarter Did you pick any track in particular?
13:45:43FromGitter<mratsim> most assignements will need to modify dimensions and strides, and not data
13:46:14FromGitter<mratsim> is there a way to ask Nim GC if an object has the only reference to a ref seq ?
13:47:36dom96federico3: write a Nimble package for it :)
13:50:15FromGitter<Varriount> @mratsim Unless a sequence is shallow, there will only ever be one reference to it.
13:50:43FromGitter<Varriount> @mratsim Remember, a sequence is backed by an array allocated from heap memory.
13:51:13FromGitter<Varriount> If the sequence has to change size, memory may need to be reallocated, and the pointer to the heap data will change.
13:54:02FromGitter<mratsim> I understand that. But if I use a ref object or a ref seq, I have to keep track of when the data can be modified directly (only one ref) or when I have to copy it (more than one ref to it)
13:55:24FromGitter<mratsim> Basically even if 2 arrays share the same underlying data for memory efficiency, as soon as there is a modification, they must behave as independent entities
13:56:44FromGitter<Varriount> Yes.
13:56:45FromGitter<mratsim> So either I pay the cost upfront by always copying on assignment. Advantage: it’s easier to reason about; disadvantage: I may use MB to GB more memory than necessary (if there is no subsequent “write”).
13:57:45FromGitter<Varriount> @mratsim By the way, are you sure these sequences shouldn't be arrays?
13:58:00FromGitter<mratsim> Or I defer the cost by only copying when writing. Advantage: memory is only allocated as needed. Disadvantage: every time I want to do modification I need to be very careful and track how many references I have to the underlying data (or better ask the GC to do it for me)
13:58:13FromGitter<Varriount> @mratsim Yes.
13:58:36FromGitter<Varriount> @mratsim Or you can always copy on write.
13:59:15FromGitter<mratsim> Ideally they should, or unchecked arrays, but I don’t want to have a static[int] in the type. I would get the same issues as when I was using andrea's linalg library
13:59:18FromGitter<Varriount> The GC can give you a lower bound on the number of references for something, but not an upper bound.
14:00:08FromGitter<Varriount> @mratsim You might want to have dimensions and strides stored in one sequence then, to minimize small allocations
14:01:12FromGitter<mratsim> that won’t be practical unfortunately. I would then need another variable to keep track of the dimensions of my ndarray
14:02:13*chemist69 quit (Ping timeout: 240 seconds)
14:03:47FromGitter<mratsim> let’s say you want to access location Matrix[i,j]. It’s at stride[0]*i + stride[1]*j + offset
14:06:07FromGitter<mratsim> Thanks @Varriount. I will first continue up to a point I can use the library for real world scenarios (with the current seq deep assignment semantics). I will then benchmark the difference if I do shallow copy.
14:07:12*chemist69 joined #nim
14:08:05FromGitter<Varriount> I'd say that you should focus on implementation, then try optimization.
14:12:22*zachcarter quit (Read error: Connection reset by peer)
14:12:43*zachcarter joined #nim
14:14:07FromGitter<mratsim> Actually the basic functionality I need is done (besides GPU support). The last thing was how to implement matrix transposition, which can be done 3 ways: ⏎ ⏎ 1) keeping the ref to the same data, and exchanging dimensions and strides (lowest cost in memory space, no cost in reordering data) ⏎ 2) copying the data, and exchanging dimensions and strides (high cost in memory space, no cost in reordering data) ⏎ 3) copying and reordering the actual data (high cost in memory space, high cost in reordering the data) [https://gitter.im/nim-lang/Nim?at=58f4cdb28e4b63533dd86313]
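A minimal sketch of the layout under discussion, with `data` behind a `ref seq` so that option 1 only swaps metadata; the `Tensor` type here is illustrative, not Arraymancer's actual definition:

```nim
type
  Tensor[T] = object
    dimensions: seq[int]   # e.g. @[3, 4] for a 3x4 matrix
    strides: seq[int]      # e.g. @[4, 1] in row-major order
    offset: int
    data: ref seq[T]       # shared storage; copy-on-write would clone this on mutation

proc atIndex[T](t: Tensor[T], i, j: int): T =
  # Matrix[i, j] lives at strides[0]*i + strides[1]*j + offset
  t.data[][t.strides[0]*i + t.strides[1]*j + t.offset]

proc transposed[T](t: Tensor[T]): Tensor[T] =
  # option 1: exchange dimensions and strides, keep the ref to the same data
  result = t
  swap(result.dimensions[0], result.dimensions[1])
  swap(result.strides[0], result.strides[1])
```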
14:21:59*devted joined #nim
14:28:11*gokr joined #nim
14:30:41FromGitter<Varriount> Hm. It depends on the data size you're supporting
14:31:37*Vladar quit (Remote host closed the connection)
14:33:08FromGitter<mratsim> Most useful shape for neural networks is 224x224x3 (for RGB) of float32 (4 bytes)
14:33:16*Tiberium joined #nim
14:33:55*bjz_ quit (Quit: My MacBook Pro has gone to sleep. ZZZzzz…)
14:34:30FromGitter<mratsim> Ideally I would like to store at least 16 to 256 of those in a Tensor
14:38:57*arnetheduck quit (Ping timeout: 258 seconds)
14:40:39FromGitter<Varriount> Number one sounds best then.
14:42:24FromGitter<mratsim> And always Copy-on-Write to avoid keeping track of refcount I guess
14:42:34FromGitter<Varriount> What would the length of the `dimensions` and `strides` members be for that kind of Tensor?
14:43:29FromGitter<mratsim> dimensions = (256, 224, 224, 3)
14:44:15FromGitter<mratsim> strides = (150528, 672, 3, 1)
14:45:31FromGitter<Varriount> Ok, so they wouldn't be nearly as long as the data sequence
14:45:40FromGitter<mratsim> if 1D Vector: length is 1, if 2d matrices, length is 2, if 3D Tensor, length is 3, if 4D like my example length is 4
14:46:19FromGitter<mratsim> 5D can happen for videos (5th dimension will be time) or 3D images
14:46:51FromGitter<mratsim> 6D may happen in some years (3D videos)
14:47:09FromGitter<mratsim> 7D will not happen
14:47:55FromGitter<mratsim> so dimensions' and strides' lengths are always less than 7
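For reference, a small sketch of how the strides quoted above fall out of the dimensions in row-major order (each stride is the product of the dimensions to its right):

```nim
proc rowMajorStrides(dimensions: seq[int]): seq[int] =
  result = newSeq[int](dimensions.len)
  var acc = 1
  for i in countdown(dimensions.high, 0):
    result[i] = acc
    acc *= dimensions[i]

assert rowMajorStrides(@[256, 224, 224, 3]) == @[150528, 672, 3, 1]
```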
14:50:27*Jesin joined #nim
14:51:38*freevryheid joined #nim
14:51:51*freevryheid is now known as fvs
15:12:42*Arrrr joined #nim
15:12:42*Arrrr quit (Changing host)
15:12:42*Arrrr joined #nim
15:26:11*yglukhov joined #nim
15:33:29*Nobabs27 joined #nim
15:41:26*vlad1777d joined #nim
15:42:50*Vladar joined #nim
15:48:39*Nobabs25 joined #nim
15:51:26*Nobabs27 quit (Ping timeout: 252 seconds)
15:51:50shashlickdoes Nim use stdcall convention on Windows by default for all DLLs calls?
15:52:51*Tiberium quit (Remote host closed the connection)
15:53:05*Tiberium joined #nim
15:57:59*filcuc joined #nim
15:58:07*filcuc quit (Client Quit)
16:06:25FromGitter<gogolxdong> binutils work well
16:07:53FromGitter<gogolxdong> it's inspiring
16:12:40*Nobabs227 joined #nim
16:13:29*nsf quit (Quit: WeeChat 1.7)
16:14:25*sz0 joined #nim
16:14:36FromGitter<gogolxdong> I can run nimkernel on amd64 platform now :)
16:15:05*Nobabs25 quit (Ping timeout: 260 seconds)
16:15:27FromGitter<gogolxdong> be enlightened all of a sudden
16:15:29*Nobabs27 joined #nim
16:17:32*Nobabs227 quit (Ping timeout: 240 seconds)
16:19:40*Nobabs25 joined #nim
16:22:05*Nobabs27 quit (Ping timeout: 260 seconds)
16:30:49*Arrrr quit (Quit: Leaving.)
16:31:52FromGitter<Bennyelg> Hey guys, does anyone know who the Nim book is aimed at?
16:37:40*Nobabs227 joined #nim
16:40:23*Nobabs25 quit (Ping timeout: 252 seconds)
16:41:53*fastrom quit (Quit: Leaving.)
16:44:48*fastrom joined #nim
16:55:52TiberiumBennyelg: you mean target audience?
16:56:39*Nobabs25 joined #nim
16:56:42Tiberiumthere's first chapter for free
16:57:31Tiberium"This book is by no means a beginner’s book. k. It assumes that you have knowledge of at least one other programming language and that you have experience writing software in it. " from free chapter (1.4 )
16:59:07*fastrom quit (Quit: Leaving.)
16:59:23*Nobabs227 quit (Ping timeout: 260 seconds)
17:03:43*fastrom joined #nim
17:10:15*yglukhov quit (Remote host closed the connection)
17:11:07*gokr quit (Ping timeout: 240 seconds)
17:11:30dom96shashlick: you need to specify it via a pragma
17:11:32dom96IIRC
17:11:41*Nobabs227 joined #nim
17:14:15*Nobabs25 quit (Ping timeout: 268 seconds)
17:16:02*fastrom quit (Quit: Leaving.)
17:17:09*fastrom joined #nim
17:17:58Tiberiumgogolxdong can you please upload it to github?
17:18:00Tiberiummaybe as a fork
17:18:05TiberiumI want to play with it too :)
17:18:47FromGitter<TiberiumPY> @gogolxdong please :)
17:22:40*Nobabs25 joined #nim
17:25:10*Nobabs227 quit (Ping timeout: 240 seconds)
17:26:33*zachcarter quit (Quit: zachcarter)
17:27:05*fastrom quit (Quit: Leaving.)
17:28:33ldleworkIs there a way to reflect the parameters of an arbitrary function or method?
17:28:39*Nobabs227 joined #nim
17:31:28*Nobabs25 quit (Ping timeout: 260 seconds)
17:31:40*Nobabs25 joined #nim
17:33:54*Nobabs227 quit (Ping timeout: 240 seconds)
17:36:25*Nobabs25 quit (Client Quit)
17:36:36shashlickdom96: thanks
17:39:00shashlickdom96: how do you do it? I get an error for => proc BASS_ChannelGetData*(handle: DWORD, buffer: pointer, length: DWORD): DWORD {.stdcall, .importc, dynlib: basslib.}
17:39:27demi-the importc shouldn't have a leading `.`
17:39:58Tiberiumwait, we're not the first to do nim cross-compilation via mingw
17:40:14Tiberiumhttps://github.com/vegansk/nimtests/blob/b179e9f1ecf63cd7dcd21b8f64dab3559f0e7b6a/basic/crosscompile/nim.cfg
17:42:41shashlickdemi-: thanks! that worked
17:44:15shashlickin fact, changing ChannelGetData() to stdcall got rid of the bass.dll crash I was seeing (Varriount, dom96)
17:44:26shashlicklife's good :)
17:44:29*Trustable quit (Remote host closed the connection)
17:45:32demi-shashlick: the `{.` and `.}` enclose the pragmas, you just comma separate them within
17:45:55demi-the leading or trailing dots aren't significant to the pragmas themselves
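For reference, the corrected declaration looks roughly like this; `basslib` is assumed to be a constant holding the DLL name in shashlick's binding, and `DWORD` comes from `winlean`:

```nim
import winlean               # DWORD on Windows

const basslib = "bass.dll"   # assumed name of the constant/DLL

proc BASS_ChannelGetData*(handle: DWORD, buffer: pointer, length: DWORD): DWORD {.stdcall, importc, dynlib: basslib.}
```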
17:47:42shashlickdemi-: that helps
17:48:07shashlickhow is stdcall different from the default calling convention that Nim uses?
17:51:33FromGitter<stisa> shashlick : not sure about the specifics, but there's some explanation here https://nim-lang.org/docs/manual.html#types-procedural-type
17:52:07ldleworkshashlick: whathca makin
17:52:48*Nobabs27 joined #nim
17:54:18shashlickldlework: I'm using Nim to read audio input (using BASS.dll) and intend doing some realtime analysis
17:54:38ldleworkshashlick: ah cool. I thought maybe you were making some music production stuff.
17:55:48shashlicktwo ideas - one is beat detection which seems ambitious given i'm new to audio analysis, the other which I'm planning on starting with, is to identify the guitar chord i'm playing (most dominant low frequency) and play back a bass note that corresponds to it
17:58:04*rauss joined #nim
17:59:01shashlickbasically automating my bass guitar guy
17:59:24shashlickonly feasible now cause Nim is fast enough to do this, project was too ambitious for Python
18:02:51*bungoman_ joined #nim
18:06:31*Nobabs25 joined #nim
18:06:33*bungoman quit (Ping timeout: 240 seconds)
18:09:23*Nobabs27 quit (Ping timeout: 260 seconds)
18:10:02*fastrom joined #nim
18:24:27*sz0 quit (Quit: Connection closed for inactivity)
18:28:32*devted quit (Ping timeout: 240 seconds)
18:35:51*fastrom quit (Quit: Leaving.)
18:36:55*Araq joined #nim
18:37:01dom96ahh cool. Yeah, the calling conventions are a real gotcha
18:41:57FromGitter<Varriount> Which is why C2Nim helps. It needs some love though.
18:42:24dom96Would c2nim help in this situation?
18:43:23FromGitter<Varriount> It can detect calling conventions for regular function prototypes.
18:43:29*Nobabs227 joined #nim
18:43:38FromGitter<Varriount> I don't know about function typedefs though.
18:43:45ldleworkAnyone bored and wanna do some graphics programming?!
18:46:12*Nobabs25 quit (Ping timeout: 258 seconds)
18:51:06*libman quit (Quit: Connection closed for inactivity)
18:53:00*xmonader2 is now known as xmonader
18:59:32*gokr joined #nim
19:05:07*gokr quit (Ping timeout: 240 seconds)
19:09:28*Nobabs25 joined #nim
19:11:50*Nobabs227 quit (Ping timeout: 245 seconds)
19:12:09*fastrom joined #nim
19:13:47fvsgenerics question, say I have a function proc ln10[T](x:T):T = ln(10.0), how can I make the 10.0 a float32 if x is float32? - I presume 10.0 is float64, right?
19:14:11FromGitter<mratsim> 1) T
19:14:32*Nobabs227 joined #nim
19:14:56fvs10.0 is T?
19:15:01FromGitter<mratsim> no
19:15:09FromGitter<mratsim> instead of 10.0 use 10.T
19:15:26FromGitter<mratsim> if T is int it will do 10.int, if T is float64 it will do 10.float64
19:15:38fvsthanks!
19:16:18FromGitter<mratsim> you should use let ln10: T = ln(10.T) if it’s inside a proc
19:16:54*Nobabs25 quit (Ping timeout: 240 seconds)
19:19:44*chemist69 quit (Ping timeout: 255 seconds)
19:22:22*chemist69 joined #nim
19:34:56FromGitter<Varriount> You can also do T(10.0)
19:35:16FromGitter<Varriount> I prefer that style.
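A small sketch of the two conversion styles side by side, based on fvs's `ln10` example; `math.ln` is the stdlib natural logarithm:

```nim
import math

proc ln10A[T](x: T): T = ln(T(10.0))   # conversion-call style: T(10.0)
proc ln10B[T](x: T): T = ln(10.T)      # dot style: 10.T

echo ln10A(1.0'f32)   # float32 all the way through
echo ln10B(1.0'f64)   # float64 all the way through
```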
19:35:27*fastrom quit (Quit: Leaving.)
19:37:29*Nobabs25 joined #nim
19:40:02*Nobabs227 quit (Ping timeout: 240 seconds)
19:41:28*Nobabs227 joined #nim
19:44:28*Nobabs25 quit (Ping timeout: 260 seconds)
19:50:27FromGitter<mratsim> I stumbled upon that kind of conversion completely randomly, I don’t remember seeing that in the docs. Is it mentioned somewhere?
20:00:53fvsI like the 0.T but T(0.5) is required as 0.5T fails I guess.
20:04:58fvsgot me thinking - is it at all possible to define a generic const,say const ι = [0.T, 1.T]
20:07:28*Nobabs25 joined #nim
20:07:32*nsf joined #nim
20:08:24*fastrom joined #nim
20:08:42*Tiberium quit (Remote host closed the connection)
20:09:56*Nobabs227 quit (Ping timeout: 252 seconds)
20:10:32*Nobabs227 joined #nim
20:13:33*Nobabs25 quit (Ping timeout: 260 seconds)
20:14:30*Nobabs25 joined #nim
20:16:53*Nobabs227 quit (Ping timeout: 255 seconds)
20:17:32*Nobabs227 joined #nim
20:20:10*zachcarter joined #nim
20:20:23*Nobabs25 quit (Ping timeout: 252 seconds)
20:22:30*Nobabs25 joined #nim
20:24:59*Nobabs227 quit (Ping timeout: 255 seconds)
20:27:02*fastrom quit (Quit: Leaving.)
20:35:01*Trustable joined #nim
20:38:32*Nobabs227 joined #nim
20:41:12*Nobabs25 quit (Ping timeout: 258 seconds)
20:49:31*Nobabs25 joined #nim
20:50:13*nsf quit (Quit: WeeChat 1.7)
20:52:17*Nobabs227 quit (Ping timeout: 252 seconds)
20:58:28*Trustable quit (Remote host closed the connection)
21:00:28*Nobabs227 joined #nim
21:03:03*Nobabs25 quit (Ping timeout: 258 seconds)
21:04:31*Nobabs25 joined #nim
21:07:18*Nobabs227 quit (Ping timeout: 260 seconds)
21:07:26FromGitter<Varriount> Benchmarks are hard: https://forum.nim-lang.org/t/2917
21:08:30*fvs left #nim ("leaving")
21:10:01*Nobabs227 joined #nim
21:12:41*Nobabs25 quit (Ping timeout: 255 seconds)
21:14:56*Nobabs25 joined #nim
21:17:29*Nobabs227 quit (Ping timeout: 260 seconds)
21:17:30*bjz joined #nim
21:18:27FromGitter<Varriount> @mratsim https://nim-lang.org/docs/manual.html#statements-and-expressions-type-conversions
21:18:59FromGitter<Varriount> The manual knows all! (That araq puts in it)
21:24:07*krux02 joined #nim
21:25:21krux02good evening
21:26:23ldleworkhi krux02
21:26:27ldleworkhow are ya
21:26:38*rauss quit (Quit: WeeChat 1.7)
21:26:46krux02hi, I'm fine
21:26:54ldleworkgood to hear
21:27:39krux02did you find out the problem why the OpenGl context could not be created?
21:28:33ldleworkNo
21:28:55ldleworkkrux02: I've been trying to convince federico3 to bind sdl-gpu
21:29:25ldleworkWhich looks very simple. A simple sprite API that does ordered batchin behind the scenes (like sfml) and also a simple shader API
21:29:40krux02I really tried to be minimalistic in what I used for that library and now it turns out that this did not give me very wide cross platform support
21:30:15ldleworkkrux02: I can understand your frustration
21:30:25ldleworkkrux02: it may just be a situation with optimus though
21:30:41ldleworkhowever, I do not have trouble with other opengl applications
21:30:57ldleworkfor example, pydsigner and I both experience broken vsync
21:31:03ldleworkeven when vsync is on, we experience tearing
21:31:07pydsignerHiya krux02
21:31:16ldleworkso we probably just have a shit stack that you shouldn't worry too much about
21:31:22krux02yea, but even if you don't use the nvidia driver at all, you should be able to run the demos, because the Intel drivers can do everything necessary
21:31:25krux02I looked that up
21:31:34ldleworkSure but how to force their use?
21:31:41ldleworkIts confusing topic for me.
21:32:11krux02well when I used that I had to start programs with optirun
21:32:35krux02was a bit problematic in some cases when the program I tried to run had its own startup script
21:32:35ldleworkpydsigner: were you able to get any of his demos working even with optirun?
21:32:46ldleworkyou said you just got a black screen?
21:32:54pydsignerYeah
21:33:01pydsignerThat was without Nvidia
21:33:18krux02by the way I have intel on this computer without Nvidia
21:33:24krux02and I use software rendering
21:33:42ldleworkI mean taking on the complexity and only getting software rendering...
21:33:50krux02you should not need software rendering, but my demos are software rendering compatible even performance wise
21:33:54ldleworkkrux02: have you seen sdl-gpu
21:33:55krux02everything is super efficient
21:33:59krux02nope
21:34:14ldleworkhttps://github.com/grimfang4/sdl-gpu
21:36:03*Nobabs227 joined #nim
21:36:35ldleworkI just built it and ran its demos
21:36:37ldleworkit works
21:36:38*gokr joined #nim
21:36:41krux02ok I got that googled
21:37:33ldleworkfederico3: ^
21:37:36krux02I did not look too much into details, but as far as I understand it, it tries to provide an immediate mode API, but under the hood it does some sprite gathering and then does bulk operations
21:37:45ldleworkyes
21:37:48ldleworkjust like sfml
21:37:54ldleworkbasically what I hoped you'd build for me
21:37:58krux02yea I did not use sfml yet
21:38:03ldleworkso I didn't have to deal with opengl myself
21:38:25ldleworkSo now we just need a nim binding :)
21:38:44krux02I just read things about it and then didn't see a particular reason why it should be better than just sdl2
21:38:48*Nobabs25 quit (Ping timeout: 268 seconds)
21:38:54ldleworkSDL2 does no optimization
21:39:06ldleworkit uses opengl textures but does not use them efficiently
21:39:17ldleworkyou'd have to use raw opengl to build that yourself and not use the SDL2 surfaces
21:39:35ldleworkSo its faster than sdl1, but no where close to sfml
21:39:35krux02well it is hard to say what usage is efficient and what usage is not efficient
21:39:43ldleworkeh?
21:39:59ldleworkI'm saying SDL2 does no logic on the backend. It simply uses opengl backed textures.
21:40:08krux02for gpu sdl or sfml you still need to use the immediate mode correctly in order for it to be able to optimize it
21:40:29krux02I think you should use it, if it provides for you the path of least resistance
21:40:31ldleworkNot with sdl2, it doesn't support doing any optimization at all
21:40:46zachcarterooo gpu talk
21:41:10krux02I want to provide a solution that is the path of least resistance for a lot of people, but when it is clearly not the path of least resistance I definitely need to put some work into it.
21:41:16zachcarterbtw I looked at binding sdl-gpu and gave up when I knew very little about binding, I was going to write bindings for it for an article I was writing but never got around to it
21:41:33ldleworkIt looks very useful.
21:41:47krux02well you can write a binding for it
21:41:48ldleworkCompared to say BGFX which can introduce a lot of complexity into your implementation.
21:41:48FromGitter<Varriount> zachcarter: Did you pick a song?
21:42:02ldleworkkrux02: are you so sure? :)
21:42:03zachcarterbgfx is useful
21:42:14krux02as long as it has not too many macros and unbindable stuff, writing bindings is pretty straightforward
21:42:14zachcarterit does more than sdl gpu does, they’re two totally different libs
21:42:27zachcarterVarriount: not yet, I found a few I liked though
21:42:28ldleworkI mean they both implement ordered batching no?
21:42:32zachcarterno
21:42:37ldleworkPretty sure...
21:42:44zachcarterthey don't
21:42:50zachcarteryou still have to batch draw calls yourself with bgfx
21:42:53*xmonader quit (Ping timeout: 252 seconds)
21:42:57zachcartersdl gpu is higher level than bgfx
21:43:13krux02by the way I prefer manual batching in an API
21:43:14ldlework"bgfx is using sort-based draw call bucketing. This means that submission order doesn’t necessarily match the rendering order, but on the low-level they will be sorted and ordered correctly."
21:43:21krux02it is more honest about what is actually going on
21:43:23zachcarteryes that’s different from what sdl gpu does
21:43:29zachcarterthat’s how the internal renderer works
21:43:40zachcarterit doesn’t mean it does draw call batching like sdl2 gpu does
21:43:44ldleworkOK you are saying "they don't do that"
21:44:01krux02I looked at bgfx, too
21:44:07zachcarterthat’s right bgfx doesn’t batch your draw calls
21:44:16zachcarterlike sdl gpu does
21:44:19krux02it looks very intelligently designed
21:44:42krux02worth investing time
21:44:55ldlework"On the high level this allows more optimal way of submitting draw calls for all passes at one place, and on the low-level this allows better optimization of rendering order."
21:45:01ldleworkWhat is batching, if not exactly this?
21:45:50zachcarterfor instance
21:46:13zachcarteryou are drawing 12 vertices
21:46:22zachcarter2 sprites
21:46:34zachcarter6 indices
21:46:53ldlework?
21:46:55zachcartersdl gpu would submit them in one draw call
21:47:07zachcarterbgfx would not without you writing that abstraction
21:47:33zachcartersdl gpu is like a sprite batch
21:47:43zachcarteryou say start drawing
21:47:46zachcartermake your draw calls
21:47:48ldleworkWhat is it describing then in the above quotes?
21:47:49zachcartersay finish drawing
21:48:03zachcarterit’s describing the way the internal renderer works: it uses submission-bucket-based rendering
21:48:22zachcarterwith bgfx you don’t worry about the order of your draw calls, you just submit them and the underlying renderer buckets and orders them
21:48:31zachcarteryou use views for ordering things
21:48:41ldlework....
21:48:45zachcarterit’s a much lower level concept than what sdl2 gpu describes
21:48:54zachcarterI can’t be any clearer in my explanation
21:49:19zachcarterbecause I don’t know enough about bucket based draw sorting to explain this to you
21:49:28zachcarterI can link you to articles though if you’re interested in reading
21:49:32ldleworkI just read the one linked
21:49:40ldleworkIt sounds like you put draws into buckets manually
21:49:56ldleworkwhereas in sdl-gpu, it automatically organizes them based on sprite state
21:50:11ldleworkIs that really drastically different enough to stage the distinction?
21:50:17zachcarterit’s very different
21:50:26ldleworkCan you expound the distinction?
21:50:28zachcarterworking with bgfx is akin to working with opengl
21:50:32zachcarteror direct x
21:50:35ldleworkI don't need analogies lol
21:50:38zachcarterso there’s no batching out of the box
21:50:49zachcarteryour’e still using index and vertex buffers
21:50:59zachcarterand if you want batching you have to implement it yourself
21:51:12ldleworkBut not really though
21:51:15*Vladar quit (Quit: Leaving)
21:51:20ldleworkYou simply have to utilize the batching bgfx provides?
21:51:50zachcarterI’ve stated I don’t know how many times bgfx doesn’t do this for you
21:51:56ldleworkThe bucketing, which then results in the same kind of single-call-for-multiple-artifacts that sdl-gpu is doing automatically?
21:52:22ldleworkYou haven't clarified about what I'm asking though.
21:52:35ldleworkBGFX appears to offer you some kind of functionality allowing you to bucket your drawing right?
21:52:45ldleworkIt comes with that, you don't implement the bucketing that it advertises that it does do you?
21:52:57ldleworkSo the distinction is that you have to actively engage this mechanism, you don't have to built it from scratch.
21:53:01*Nobabs25 joined #nim
21:53:21zachcarterdraw call bucketing is different from sprite batching
21:53:47zachcarterhttp://realtimecollisiondetection.net/blog/?p=86
21:53:49ldleworkSprite batching is surely utilizing the same exact optimization no?
21:54:05zachcarterno
21:54:10ldleworkThat is, sorting artifacts into buckets as to unify their draw calls
21:54:22ldlework...OK then I have no idea what sprite batching is then :(
21:54:46zachcarterevery sprite is 6 vertices
21:54:51zachcarterforming 2 triangles
21:54:59zachcarterif you use indices 3 verts
21:55:00zachcarter6 indices
21:55:08zachcarterif you batch your draw calls
21:55:11ldleworkWhat about the texture?
21:55:21zachcartersure you can have tex coords and colors and all that too
21:55:32*Nobabs227 quit (Ping timeout: 240 seconds)
21:55:38ldleworkOK so you're saying since all sprites are exactly the same format, they can fit into a single bucket
21:55:38zachcarterbut the point is if you're batching your draw calls
21:55:52zachcarteror into a buffer, as OpenGL refers to them
21:55:58ldleworkSo
21:56:04ldleworkIf I were using BGFX
21:56:11ldleworkI could use the bucketing that BGFX provides
21:56:19ldleworkto put all my sprite draws into a single draw call
21:56:21ldleworkyes or no?
21:56:23zachcarterno
21:56:44ldleworkbecause?
21:56:46zachcarterbgfx provides an api abstraction over opengl, direct x etc
21:56:51zachcarterbecause it works the same way as if you were using opengl
21:57:11ldleworkso when the readme says those above quotes
21:57:12zachcarterthe bucketing is how he orders draw call data
21:57:22zachcarterif you look at the bgfx api
21:57:23ldleworkthat have absolutely no significance or relation to this conversation?
21:57:36ldleworkyou cannot actually interact with this bucketing feature?
21:57:40zachcarterno
21:57:46zachcarterit’s internal to the renderer
21:58:01ldleworkSo what exactly does the renderer sort and bucket?
21:58:04zachcarteryou make draw calls the same way you make draw calls in direct x or open gl or whatever
21:58:06zachcarterdraw call data
21:58:07ldleworkIf you implement your own sprite batching
21:58:19ldleworkAnd you use BGFX's renderer API to implement that sprite batching
21:58:23zachcarteryes
21:58:33ldleworkwhat does BGFX's renderer do internally that reflects the advertisements above?
21:58:39ldleworkin addition to whatever you've already done?
21:58:42zachcarteryou'd have to dig into the renderer code
21:58:45ldleworklol
21:58:53ldleworkSo its possible you don't know?
21:59:09zachcarterI do know because I’ve implemented sprite batching with bgfx
21:59:13zachcarterI don’t know how OpenGL interacts with the hardware
21:59:21zachcarternot on a line by line basis
21:59:21ldleworkSure but what is BGFX doing on your behalf?
21:59:24zachcarterI understand the pipeline
21:59:58zachcarterbgfx is providing a single API that will call whatever underlying graphics API makes sense per the device / platform you’re running on
22:00:01ldleworkFrom what you've said, BGFX is both a unifying pure-graphics API, but also a renderer, with internals that does draw call batching.
22:00:06zachcarteramongst other things
22:00:16ldleworkI don't see how you can't see the mutual exclusivity
22:00:18zachcarteryou keep saying it does draw call batching
22:00:23zachcarterI keep saying it doesn’t
22:00:32zachcarterI say it does draw call sorting and bucketing
22:00:38zachcarterand I linked to an article explaining that concept
22:00:48ldleworkI mean
22:01:01*Nobabs227 joined #nim
22:01:01ldleworkby putting artifacts into buckets
22:01:08ldleworkand running single draw calls on them
22:01:11ldleworkthat's batching dude
22:01:32zachcarterfine they do the same thing
22:01:32ldleworksomething is putting those things into those buckets, and saving draw calls because of it
22:01:38zachcarteryou’ve proven it
22:01:45ldleworkIf you are doing "sprite batching" yourself, using this "pure graphics api"
22:01:51ldleworkIE, saving your own draw calls
22:01:55ldleworkThen what is the "renderer" doing?
22:02:11ldleworkI'm just exploring the logic of what you're saying so I can understand. It seems contradictory. I'm not trying to win.
22:02:17ldleworkI don't understand your description.
22:03:10zachcarterPlease read that article I linked
22:03:24ldleworkI read it
22:03:41*Nobabs25 quit (Ping timeout: 260 seconds)
22:03:46zachcarterokay so that is quite different from what I’m describing
22:04:10*Nobabs27 joined #nim
22:04:26zachcarterit's after your draw calls are already submitted to the renderer that this takes place
22:04:54zachcarterthat's when what the article describes happens
22:04:59zachcarteryou have no control over that
22:05:01ldleworkOK I think I get that now
22:05:11zachcarterso that’s what bgfx does internally
22:05:21zachcartersdl gpu would be like a layer sitting on top of bgfx
22:05:23ldleworkIt is literally keying low-level draw calls based on their parameters, then sorting them.
22:05:27zachcarterright
22:05:57zachcarterso like all these draw calls share the same material
22:05:59*Nobabs227 quit (Ping timeout: 252 seconds)
22:05:59zachcarterbucket them
22:06:04ldleworkIt doesn't know how the draw calls came to be
22:06:12zachcarterexactly
22:06:16ldleworkso you could combine approaches, which is what you were saying above
22:06:43zachcarteryeah the kind of batching that sdl gpu does is essentially batching draw calls before they go to the renderer
22:06:53zachcarterso that any calls that share the same texture
22:06:57ldleworkzachcarter: and to further prove my clarity, merely bucketing them is not "merge them into a single draw call"
22:07:00zachcarterare submitted in a big batch
22:07:08zachcarterright
22:07:10ldleworkit's simply for ordering because that's somehow more efficient for the cpu itself
22:07:13zachcarterit’s not batching them, it’s ordering them
22:07:14zachcarterexactly
22:07:22ldleworkI'm glad you didn't storm off :)
22:07:49ldleworkI think my confusion makes sense if you apply that language a level up and conflate it with batching as I did. Thanks for sticking it out.
22:07:50zachcarterno haha, sorry I’m not the best at clarifying my thoughts at times
22:08:11zachcartersure thing, it’s not the simplest topic either to discuss over the interwebz
22:08:27ldleworkBut now I'm more certain that I want sdl-gpu
22:08:35zachcarterI think sdl gpu is perfect for your project
22:08:37ldleworkThough BGFX is the path to javascript frontend right?
22:08:39zachcarterand a lot of people’s projects
22:08:50ldleworkOr is it
22:08:52zachcarterbgfx would be the path to getting things on as many platforms as possible
22:09:05zachcarterthings that don’t support opengl
22:09:12zachcarteralso for vulkan / metal support going forward
22:09:33zachcarterbut like you said you pay in complexity
22:09:46ldleworkkrux02: so nim-gpu ontop of bgfx, whatdya say?
22:09:51ldleworkXD
22:10:00zachcarterthat would be nice
22:10:02*Nobabs25 joined #nim
22:10:02zachcarterI’d use it
22:10:10ldleworkzachcarter: I know I've said it before so I'll stop saying since I know its annoying
22:10:17ldleworkBut you should break up frag a bit maybe..
22:10:39zachcarteroh take out the renderer?
22:10:53ldleworkYeah you have multiple useful parts that could be useful on their own I think
22:11:07zachcarterhrm I might be able to
22:11:19zachcarterI’m very close to finishing the first demo with it
22:11:33zachcarterthe one thing I’m scared of having happen, is it turning into Piston xD
22:11:33*nsf joined #nim
22:11:42ldleworkI think I'd be more open to approaching Frag to work on components rather than having to take on the whole framework.
22:12:00zachcarterpoint well taken
22:12:15zachcarterI know it’s not the simplest of projects to just like clone and get going with
22:12:19ldleworkOr utilizing them to stand up my own libraries, etc (therefore encouraging me to contribute)
22:12:26zachcarterright
22:12:32ldleworkzachcarter: yeah and my goal is to use nim to produce the most aesthetic api possible
22:12:35*Nobabs27 quit (Ping timeout: 252 seconds)
22:12:47ldleworkto explore that space, rather than make an engine someone might use to make money some day :)
22:13:18FromGitter<Varriount> Physics is generally separate from a graphics/game framework
22:13:21ldlework(not that they are necessarily exclusive but obvious difficulty there)
22:13:32zachcartersure
22:13:35FromGitter<Varriount> Sounds+Graphics+Input handling is usually sufficient.
22:13:50ldleworkVarriount, I think physics is basically decoupled in frag, there's just some integration which makes sense.
22:13:59ldleworkIts nice when your sprites are easy to get bouncing off each other
22:14:08zachcarteraye physics is an optional module
22:14:10zachcarterspine is optional
22:14:11ldleworkIf you have to write that integration between frag's sprites and Chipmunk then the value goes down
22:14:58ldleworkbut I could see things like frag-bgfx frag-gpu frag-ecs frag-audio, etc
22:15:21ldleworkNim needs Interfaces...
22:18:02*Nobabs227 joined #nim
22:18:25krux02ldlework: what do you mean with interfaces?
22:18:54krux02If you mean go like interfaces, I wrote a macro for that, it's on the forums
22:19:10krux02I think andrea once put it on github
22:19:26ldleworkI saw someone's interface and it causes a stack overflow when testing against things that don't satisfy the interface
22:19:30*bjz quit (Quit: My MacBook Pro has gone to sleep. ZZZzzz…)
22:19:40ldleworkIt was the one andrea put on github
22:20:48*Nobabs25 quit (Ping timeout: 260 seconds)
22:21:00*Nobabs25 joined #nim
22:23:04FromGitter<Varriount> c-c-c-concepts!
22:23:33*Nobabs227 quit (Ping timeout: 240 seconds)
22:25:00ldleworkVarriount, yeah but its just a fantasy
22:25:32ldleworkThat is for concepts to grow vtable support or whatever it is
22:26:28ldleworkI have been working in a serious golang codebase the last few weeks
22:26:33ldleworkIts been pretty shocking
22:27:02ldleworkI figured that, with interfaces as basically its only good feature, I would find tons of things implementing against interfaces, but it doesn't seem that way.
22:32:09*xet7 quit (Quit: Leaving)
22:35:56*nsf quit (Quit: WeeChat 1.7)
22:37:29*Nobabs227 joined #nim
22:40:02*Nobabs25 quit (Ping timeout: 240 seconds)
22:46:40*vendethiel quit (Quit: q+)
22:53:21ldleworkVarriount, do you know if there is a way to reflect the arguments of arbitrary procs?
22:53:58FromGitter<Varriount> What kind of reflection?
22:54:17ldleworkany kind?
22:54:21FromGitter<Varriount> Can you give me pseudocode of what you want to do?
22:54:41ldleworkI want to look up a function by a string, and then get a list of its parameters including type info
22:54:54ldleworkVarriount, I'm wondering if an IoC container is even possible in Nim
22:55:04FromGitter<Varriount> IoC?
22:55:16ldleworkIoC container
22:55:21ldleworkerr inversion of control
22:56:20ldleworkVarriount, if you can tell an IoC container how to produce instances of the dependency of a client, it can then create instances of that client.
22:56:24FromGitter<Varriount> ldlework: Nim has getImpl, but I believe that only works for types. A single symbol can have multiple procedures.
22:56:38ldleworkOh right interesting...
22:56:51ldleworkWell that's OK though
22:57:20FromGitter<Varriount> The general way to go about this is to have the user mark the procedures/types they want, and generate a structure from that.
22:57:29ldleworkThat's not the general way to go about it at all.
22:57:39ldleworkAre you familiar with the implementations of IoC containers?
22:58:03FromGitter<Varriount> It's the general way of generating encompassing data structures from macros.
22:58:19FromGitter<Varriount> Not something IoC-related.
22:58:27ldleworkSure, maybe in Nim
22:58:28*Nobabs25 joined #nim
22:58:38ldleworkIn other languages no such explicit demarcation is required
22:59:54FromGitter<Varriount> In Java, don't you generally need to inherit or extend a base interface?
22:59:57ldleworkWith IoC containers, auto-wiring is the most popular modern approach which combines reflection and convention over configuration
23:00:48ldleworkI know nothing about Java, it is not the language I look at when trying to figure out where the ideals might be.
23:01:01ldleworkI would choose the state of art in C# over Java everytime
23:01:08*Nobabs227 quit (Ping timeout: 255 seconds)
23:01:49ldleworkBut I guess I'm hearing "No" which is kinda what I expected.
23:05:53FromGitter<Varriount> ldlework: I know what the general technique of inversion of control are. I don't know what you mean by an "IoC" container, since "IoC" is a general concept.
23:06:29*Nobabs227 joined #nim
23:08:07ldleworkSure but IoC containers are not. But confusion over the terminology is part and parcel.
23:09:14*Nobabs25 quit (Ping timeout: 255 seconds)
23:09:26*Nobabs27 joined #nim
23:11:03FromGitter<Varriount> Nim has generics, concepts, methods, and references. It has macros that can automate the construction of types. It also has runtime type reflection (https://nim-lang.org/docs/typeinfo.html)
23:11:29*Nobabs227 quit (Ping timeout: 255 seconds)
23:11:53FromGitter<Varriount> I honestly don't understand what you want, because I'm not familiar enough with the terminology you're using.
23:12:36FromGitter<Varriount> An example scenario or situation would help.
23:12:58ldleworkI think you're being disingenuous
23:13:11ldleworkWhat part of "find a function by its name, and know its parameters" is strange terminology?
23:13:42ldleworkJust because you place knowing the whole domain of my problem in front of answering that simple question, does not make my language use opaque.
23:15:04FromGitter<Varriount> Use macros.getImpl
23:16:41ldleworkVarriount, is there a way to get a list of all defined procs though?
23:17:28ldleworkWhat does macros.getImpl do in face of proc overloads? I suppose I can try that one out myself.
23:17:44FromGitter<Varriount> I don't know. I can't test out anything at the moment.
23:18:06FromGitter<Varriount> ```retrieve the implementation of a symbol s. s can be a routine or a const.```
23:18:41ldleworkYeah I read that. Does it answer the question of what happens in the face of an overload?
23:18:47FromGitter<Varriount> No.
23:18:58ldleworkWhy is everyone hasslin me today? lol
23:20:34FromGitter<Varriount> ldlework: To be blunt: Because I'm trying to work, you're asking questions that I can't confidently answer, and I don't want you to blame Nim because a strategy that works with C# and Java can't be used the exact same way in Nim.
23:21:03FromGitter<Varriount> A good question to ask is "How would this be done in C++ or C?"
23:21:52ldleworkMy question is specifically if Nim supports that strategy though.
23:23:13ldleworkSince Nim has some amount of reflection, it seems like a perfectly rational inquiry to know the extents of Nim's reflection facilities.
23:23:23FromGitter<Varriount> Yes. But it seems to me that you're trying to use an implementation method for that strategy which works best with C#
23:23:35ldleworkTrying to guess what I really care about is what leads to these estrangements not the fact that you're busy.
23:24:04ldleworkI'm literally trying to see if Nim supports that methodology. It could be the case that it did.
23:24:07ldleworkShould I have just said
23:24:24ldleworkWell, I learned this technique from C#, so obviously it is only ideal for C# and I shouldn't even try to apply these concepts anywhere else.
23:24:37ldleworkI mean what the hell are you getting at. I asked a specific question and you're playing therapist.
23:24:41ldleworkI thought you had no time.
23:27:10*peted quit (Ping timeout: 240 seconds)
23:27:11FromGitter<Varriount> Sorry
23:27:34ldleworkIf we just happened to live in a world where Nim had stronger reflection capabilities, you would have just went "Oh yeah, the thing to get all defined names is here, and here's how you look up an implementation by name." And I would've went "Oh cool."
23:27:35FromGitter<Varriount> I should have just answered the question.
23:27:45ldleworkJust because it doesn't happen to have them doesn't mean I asked the wrong question, or in the wrong way.
23:27:52ldleworkYa, you did which I appreciate.
23:28:54FromGitter<gogolxdong> @TiberiumPY I am glad to do so.
23:29:54FromGitter<Varriount> ldlework: So, does getImpl work on procedures?
23:29:56ldleworkVarriount, given that it doesn't have the facilities to do it that way, I can still appreciate the other stuff you said too.
23:30:05ldleworkAh lets find out
23:34:28ldleworkeh I don't know how to use it properly, https://glot.io/snippets/ep1c77l88z
23:40:00*Nobabs25 joined #nim
23:41:01FromGitter<Varriount> ldlework: I'll look at it later
23:42:08*osterfisch is now known as qwertfisch
23:42:32*Nobabs27 quit (Ping timeout: 240 seconds)
23:42:49FromGitter<Varriount> @araq How would I go about using getImpl to get the parameters of a function?
23:48:46*peted joined #nim
23:51:08FromGitter<stisa> ldlework: with a single `foo` https://glot.io/snippets/ep1cl57qk4 , not sure if it can work with overloaded procs
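Roughly what stisa's snippet does for a single, non-overloaded proc; the proc name and body here are illustrative:

```nim
import macros

proc foo(bar: int): int = bar * 2

macro showParams(p: typed): untyped =
  # for an unambiguous symbol, getImpl returns the full proc definition
  let impl = p.getImpl
  echo impl.params.treeRepr   # FormalParams: return type, then each IdentDefs
  result = newEmptyNode()

showParams(foo)
```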
23:52:03FromGitter<Varriount> For overloads you need to resolve the symbol somehow
23:52:28ldleworkmain.nim(14, 6) Error: type mismatch: got (proc (bar: float): float{.noSideEffect, gcsafe, locks: 0.} | proc (bar: int): int{.noSideEffect, gcsafe, locks: 0.})
23:52:30ldleworkbut expected one of:
23:52:32ldleworkmacro print(node: typed): untyped
23:53:11ldleworkthe macro we're writing should be doing the resolution
23:53:38ldleworkin the domain of my problem, by looking at the parameters and seeing which it can satisfy. Also which returns the right type, etc.
23:53:42FromGitter<Varriount> Perhaps you can use output from https://nim-lang.org/docs/macros.html#bindSym,string,BindSymRule ?
23:54:10FromGitter<Varriount> "If rule == brClosed either an nkClosedSymChoice tree is returned or nkSym if the symbol is not ambiguous. "
23:54:43ldleworkInteresting
23:54:47FromGitter<Varriount> Or in other words, if an nkClosedSymChoice tree is returned, the symbol is ambiguous
23:55:19FromGitter<Varriount> I'd look to see if an nkClosedSymChoice tree has the various pieces of data you need
23:56:58*Nobabs227 joined #nim
23:59:17*Nobabs25 quit (Ping timeout: 252 seconds)