<< 19-12-2017 >>

00:00:02FromGitter<mratsim> anything about efficient C should apply to Nim
00:00:17FromGitter<RedBeard0531> PROFILE PROFILE PROFILE!
00:01:01FromGitter<zacharycarter> @kinokasai stick to objects over refs, avoid inheritance, stick to idiomatic Nim
00:01:20FromGitter<zacharycarter> basically program like you would in C but using Nim
00:01:29FromGitter<kinokasai> What's idiomatic Nim tho
00:01:34FromGitter<kinokasai> Ah I see
00:01:50FromGitter<zacharycarter> don't be afraid of metaprogramming - abuse it
00:02:21FromGitter<zacharycarter> and take advantages of Nim's unique features - like concepts
00:02:57FromGitter<kinokasai> I remember something that threw me off. ⏎ `if foo[0] == bar:` made a copy where ⏎ `let c = foo[0]; if c == bar:` didn't
00:03:11FromGitter<RedBeard0531> This is going to sound odd, but every time I've benchmarked, --gc=markAndSweep is substantially faster (2x!) than the default (including in a benchmark of the builtin critbit tree which is *very* similar to your use case). Make sure you try out the different --gc options, and don't just take my word for it.
00:03:39FromGitter<zacharycarter> http://blog.johnnovak.net/2017/04/22/nim-performance-tuning-for-the-uninitiated/
00:04:13Araqbootstrapping is slower with mark and sweep and uses more memory
00:04:31FromGitter<zacharycarter> @kinokasai you can always open up the generated C code and figure things out yourself - I plan to eventually add assembly output to https://play.nim-lang.org/
00:04:46FromGitter<zacharycarter> I'm just lazy and slow
00:05:04FromGitter<kinokasai> Oh, I'll definitely try different gc options. ⏎ That blog seems like an interesting read, gonna get to it
00:05:11FromGitter<mratsim> @kinokasai this is a good starting point: http://leto.net/docs/C-optimization.php
00:05:11Araqyou can look at the C code if you know something about C compilers
00:05:21FromGitter<RedBeard0531> never benchmark or profile anything without -d:release. Consider using "--passC=-flto --passL=-flto". Also read https://nim-lang.org/docs/nimc.html#optimizing-for-nim
00:05:41Araqbut I would look at the assembly code instead
00:06:10Araqmost people I've seen who looked at the C code had no fucking clue how an optimizer works
00:06:16FromGitter<zacharycarter> Araq, since you're around - what do you think would be the best way to create a task / work scheduling system with Nim?
00:06:26FromGitter<zacharycarter> using channels or something else?
00:06:45Araqdepends
00:06:53FromGitter<kinokasai> Yeah, I've actually looked at the C output at some point - it's more readable than assembly - but good point about optimisations done by the C compiler
00:06:54FromGitter<RedBeard0531> Also try "nim cpp" in addition to "nim cc". The cpp defaults to using a much more efficient exception handling impl in the happy-path (that is slower in the throwing case)
00:07:45Araqfor games I would use closure iterators, zachary
00:07:55FromGitter<mratsim> For C/assembly godbolt is your friend: https://godbolt.org/
00:08:16FromGitter<kinokasai> wow
00:08:47FromGitter<RedBeard0531> Has anyone tried getting Matt Godbolt to add nim to godbolt.org?
00:08:51FromGitter<zacharycarter> Araq: thanks, wasn't even aware such a thing existed
00:09:14FromGitter<RedBeard0531> If you use --lineDir:on it can actually map each instruction back to a nim line
00:09:22Araqhey, they were designed for games and then we figured we can use them for async :P
00:09:36FromGitter<zacharycarter> :)
00:10:04FromGitter<RedBeard0531> --lineDir:on is implied by --debugger:native which makes the linux perf tool *much* more useful
00:10:44Araqthey also suck quite a bit for games so I invented .liftLocals which is not yet documented ... :P
00:12:38AraqRedBeard0531: Mark&Sweep is slower for gcbench too iirc but not by much
00:12:48GitDisc<treeform> is there a https://godbolt.org/ for nim?
00:13:01FromGitter<RedBeard0531> https://wandbox.org/ already has nim support!
00:14:18GitDisc<treeform> Thats cool too. But its not that same.
00:16:03Araqwow Intel uses the 'leave' instruction
00:16:09AraqI wonder what's up with that
00:16:09FromGitter<kinokasai> shit I didn't know nim had typeclasses
00:17:00FromGitter<kinokasai> I should read the language manual for real at some point
00:18:24FromGitter<RedBeard0531> @Araq interesting! I wonder why there is so much difference for me. This was my simple benchmark: https://gist.github.com/RedBeard0531/0a1ebf92fd7c861dec1ee9b21c795827. It just tests inserting into a critbit or hashtable and then querying it.
00:19:00FromGitter<RedBeard0531> The load phase is twice as fast with mark and sweep and the queries are ~20% faster
00:19:59Araqit's not a GC test, you don't allocate much in the query phase and so you pay for the write barrier
00:20:29Araqthe refcounting GC does pointless work, the Mark&Sweep GC none
00:20:38Araqbut that's just my guess
00:24:01FromGitter<RedBeard0531> Sure, but is just one example. I have noticed a
00:24:52FromGitter<RedBeard0531> Pattern that ever
00:25:36FromGitter<RedBeard0531> Damn phone auto sending
00:26:27*gokr quit (Ping timeout: 256 seconds)
00:29:31FromGitter<RedBeard0531> Pattern that every time I try multiple gcs, including on real programs, either they are all the same or mark and sweep is fastest by a large margin. As a professional c++ dev I find that curious and disturbing, but I try to bow to evidence. I'm not saying it always will be fastest, but it should probably be a tool that someone optimizing a nim program should keep in mind.
00:31:05GitDisc<NopeDK> It does indeed look like I have a failed wrapping somewhere since my basic test worked without a hitch... That will be tomorrow's fun. G'night all and thanks for all your help again today.
00:32:41AraqRedBeard0531: would be interesting to compare a real C++ project that uses smart pointers to one that uses raw pointers and the Boehm GC
00:35:09Araqbut it's widely known good GCs are faster until you replace malloc with region based memory management
00:37:28FromGitter<data-man> Hi! ⏎ Is there anyone with a Windows version less than Windows 10?
00:37:33Araqnot that Nim's GC is faster than malloc though -.-
00:38:21FromGitter<RedBeard0531> Yeah, it would need to be one that doesn't take advantage of the refcount to do COW-optimizations when count=1. Which rules out testing that out at my day job.
00:39:55AraqCOW is still a thing?
00:40:14Araqwasn't it replaced with move semantics? ;-)
00:41:39FromGitter<RedBeard0531> Hmm, @araq I wonder if that optimisation could be applied to nim seq and string. Copies would be free but mutations would need to check the refcount and clone if shared
00:42:31Araqcan't work, that stupid refcount doesn't count references on the stack ;-)
00:42:39FromGitter<RedBeard0531> That turned out to be a HUGE win at work
00:42:50FromGitter<RedBeard0531> Oh, damn
00:43:01Araqmove semantics are coming though
00:44:59Araqdata-man: I used to have Win XP in a VM but not anymore
00:45:27FromGitter<RedBeard0531> Yeah, I read your post and was curious about some of the details. I'll need to pick your brain on that when I have a real keyboard
00:47:33FromGitter<data-man> @Araq: Can you try, please? ⏎ ⏎ ```code paste, see link``` [https://gitter.im/nim-lang/Nim?at=5a3861a5540c78242ddf2d5e]
00:48:02FromGitter<RedBeard0531> On a related note any chance we'll get nil strings/seq that act the same as empty anytime soon? I think I remember seeing something about that.
00:48:05Araqno, told you.
00:49:05AraqRedBeard0531: that change caused random hard to track down bugs and I got tired of working on it
00:49:43Araqespecially since I'd like to rewrite strings/seqs implementation anyway to work with destructors
00:50:57FromGitter<RedBeard0531> I don't remember asking before. Good to know I shouldn't hold out hope for that change.
00:51:18Araq"no, told you" was a reply to data-man
00:51:27*marenz__ quit (Ping timeout: 240 seconds)
00:51:46Araqwell there is always hope, I could push my branch and let you fix it :-)
00:52:41*Serenitor joined #nim
00:53:56SerenitorI don't like that system./(x, y: int) returns float and I'm not fond of using div instead, is there anything I can do without getting ambiguous call error?
00:54:34Araqheresy, 'div' is nice
00:54:57AraqPython had to introduce // you can template it
00:55:01FromGitter<data-man> Oh, why not? Don't you have Nim in a VM?
00:55:18Araqno, I have no Win XP VM anymore
00:55:24FromGitter<zacharycarter> @data-man : `*<Araq>* @data-man: I used to have Win XP in a VM but not anymore`
00:57:05Serenitormh, I like the // idea, thanks
00:57:55FromGitter<RedBeard0531> Araq I'm tempted, but I've got enough on my plate already☺
01:01:15FromGitter<data-man> @zacharycarter: Спасибо, я это уже прочитал :) (Russian: "Thanks, I already read that :)")
01:01:32FromGitter<zacharycarter> Araq - when you mentioned closure iterators earlier - they'd still need some way of being scheduled right?
01:02:33FromGitter<zacharycarter> @data-man gotcha, my bad. I thought you had missed that message. BTW - I don't speak Russian
01:03:02FromGitter<zacharycarter> you were just suggesting a way to encapsulate the task?
01:12:12FromGitter<RedBeard0531> @Araq, this was my real-world, allocation-heavy testcase: https://gist.github.com/RedBeard0531/17c08ac262d8c4f62b130cee8dd50240. It parses 2400 make files totaling 55MB and calls getmodificationTime on each unique dep. default gc takes 489ms, markAndSweep 345ms.
01:12:33FromGitter<RedBeard0531> I'm actually curious what cases it does poorly on.
01:16:58FromGitter<Varriount> @RedBeard0531 Did you test using the new allocator vs the old one?
01:20:11AraqI doubt the new allocator makes a difference
01:23:24*MJCaley joined #nim
01:31:15*radagast quit (Ping timeout: 272 seconds)
01:42:41FromGitter<RedBeard0531> gc:v2 is about the same time as the default (482 vs 489ms) but much higher peak RSS (14M vs 7M). For comparison, markAndSweep was 344ms/16MB, boehm was 400ms/13MB, and none was 414ms/302MB. I haven't been able to get gc:go to work right. Any others worth trying?
01:47:36*skrylar quit (Remote host closed the connection)
01:51:11Araqgc:v2 is a realtime GC for some meaning of realtime
01:53:54*Serenit0r joined #nim
01:54:31FromGitter<RedBeard0531> congrats on implementing a very fast tracing gc! :)
01:57:21*Serenitor quit (Ping timeout: 264 seconds)
01:57:39FromGitter<RedBeard0531> what's the deal with the std libs not listed on the doc/lib.html page like bitops and sharedtables? Are they not official yet, or is that just an oversight?
02:08:37*chemist69 quit (Ping timeout: 272 seconds)
02:19:48*vlad1777d quit (Ping timeout: 240 seconds)
02:22:00*chemist69 joined #nim
02:23:06FromGitter<ephja> they might have been added after the last release
02:33:08*sz0 quit (Quit: Connection closed for inactivity)
02:57:42*Serenitor joined #nim
02:57:47*sz0 joined #nim
03:00:27*Serenit0r quit (Ping timeout: 240 seconds)
03:01:06*SenasOzys quit (Ping timeout: 260 seconds)
03:06:02*MJCaley quit (Quit: MJCaley)
03:11:50*SenasOzys joined #nim
03:43:41*dddddd quit (Remote host closed the connection)
03:57:14FromGitter<kayabaNerve> Araq: Optimizer? You mean I shouldn't have the game loop redo each object instance and redo every object based off its previous form, call the necessary functions to edit it, and then not use delete/pointers, while also having multi-nested for loops?
03:57:43*Serenitor quit (Read error: Connection reset by peer)
03:58:08*Serenitor joined #nim
03:58:09FromGitter<kayabaNerve> I'm pretty sure a vector for each object with me pushing the most recent version to the end of it is industry standard and it's bad practice to not do so
04:33:42*Senketsu_ joined #nim
04:38:34*Senketsu_ quit (Quit: Lost terminal)
05:12:28*SenasOzys_ joined #nim
05:12:32*SenasOzys quit (Read error: Connection reset by peer)
05:46:00*Serenitor quit (Quit: Leaving)
05:52:01*qih joined #nim
05:52:20*qih left #nim (#nim)
05:57:08*themagician quit ()
05:57:17*sz0 quit (Quit: Connection closed for inactivity)
06:00:10*Snircle quit (Quit: Textual IRC Client: www.textualapp.com)
06:03:12*nsf joined #nim
06:22:14*radagast joined #nim
06:25:34*ieatnerds joined #nim
06:35:11*ieatnerds quit (Ping timeout: 248 seconds)
06:56:41*gokr joined #nim
07:03:18*redlegion quit (Ping timeout: 246 seconds)
07:04:14*redlegion joined #nim
07:04:14*redlegion quit (Changing host)
07:04:14*redlegion joined #nim
07:06:17*enthus1a1t quit (Ping timeout: 272 seconds)
07:10:13*solitudesf joined #nim
07:13:50*SenasOzys__ joined #nim
07:13:59*SenasOzys_ quit (Read error: Connection reset by peer)
07:39:11*enthus1ast joined #nim
07:55:40*yglukhov joined #nim
08:13:44FromGitter<mratsim> @RedBeard0531 I tried copy-on-write in arraymancer, for my use cases it was a bad idea: see https://forum.nim-lang.org/t/3355#21128 and https://github.com/mratsim/Arraymancer/issues/157
08:16:11FromGitter<mratsim> The main killer for me is that in neural networks, tensors are wrapped in a graph, which messes up the refcount depending on whether it was assigned to a variable before being put in the graph or not.
08:16:43radagastNot related to Nim, but I am thinking on a problem. Consider that I have been given `N` number of points in a 2D plane. I have to sort the points in a way that if I join the points, the locus created as a result is not going to intersect/cross itself. I have to detect if it's not possible as well.
08:17:27radagastIs there any known algorithm for this?
08:18:18FromGitter<mratsim> most things that have to do with routing need dynamic programming. But you can also check the CGAL docs (Computational Geometry) for something that remotely resembles that
08:20:02FromGitter<mratsim> I’m sure there is, to optimize path/logistics. Maybe start with info from the salesman problem: https://en.wikipedia.org/wiki/Travelling_salesman_problem
08:21:01FromGitter<andreaferretti> @radagast sort by first coordinate?
08:22:49radagast@andreaferretti That's what I thought initially but if the locus is spiral shaped, the second coordinate comes into play. @mratsim Thanks, I'll look into that
08:25:20*claudiuinberlin joined #nim
08:25:42*Arrrr joined #nim
08:33:24*yglukhov quit (Remote host closed the connection)
08:36:50*PMunch joined #nim
08:37:02FromGitter<andreaferretti> I am pretty sure that if you sort by first coordinate and join by segments the consecutive points the resulting locus does not self-intersect
08:37:17FromGitter<andreaferretti> unless you want to also join the last and the first point
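A minimal sketch of andreaferretti's suggestion: sort the points by x (breaking ties by y) and join consecutive points with segments, so the open polyline cannot cross itself. The names here are made up for illustration.

```nim
import algorithm

type Point = tuple[x, y: float]

proc orderForPolyline(points: seq[Point]): seq[Point] =
  ## Orders points so that joining them consecutively gives a
  ## non-self-intersecting open polyline (sort by x, ties by y).
  result = points
  result.sort(proc (a, b: Point): int =
    result = cmp(a.x, b.x)
    if result == 0: result = cmp(a.y, b.y))

when isMainModule:
  echo orderForPolyline(@[(x: 2.0, y: 1.0), (x: 0.0, y: 3.0), (x: 1.0, y: 0.0)])
```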
08:38:51*GaveUp quit (Ping timeout: 260 seconds)
08:39:21*radagast quit (Ping timeout: 264 seconds)
08:39:22*GaveUp joined #nim
08:41:49*radagast joined #nim
08:46:23*yglukhov joined #nim
08:47:29FromGitter<gogolxdong> Is there an HMAC and sha256 implementation?
08:49:54*floppydh joined #nim
08:51:03FromGitter<gogolxdong> @Varriount around? what the name of hmac and sha256 nimble package?
08:51:25*dddddd joined #nim
08:51:56Araqnimble search sha
08:53:40FromGitter<gogolxdong> thanks
08:54:51PMunchHere's sha: https://github.com/jangko/nimSHA2
08:55:26PMunchAnd this looks like HMAC: https://github.com/OpenSystemsLab/hmac.nim
09:02:59*gmpreussner quit (Ping timeout: 268 seconds)
09:05:03*gmpreussner joined #nim
09:20:18*Arrrr quit (Read error: Connection reset by peer)
09:21:22*SenasOzys__ quit (Ping timeout: 255 seconds)
09:30:57*SenasOzys__ joined #nim
09:40:09*Vladar joined #nim
09:54:28*Arrrr joined #nim
09:55:36yglukhovAraq: mind merging https://github.com/nim-lang/Nim/pull/6942 pls?
09:56:29*Yardanico joined #nim
10:06:04FromGitter<tim-st> Is there an easy way to create a new hash table of type [V, K] with all elements from an existing hash table of the form [K, V] ?
10:08:20yglukhovAraq: thank you
10:09:57yglukhovtim-st: proc reverse[K, V](t: Table[K, V]): Table[V, K] = result = initTable[V, K](); for k, v in t: result[v] = k
10:10:21yglukhoveasy enough? ;)
10:10:24FromGitter<tim-st> @yglukhov nice, thanks!
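The same one-liner reformatted as a runnable snippet (the example table here is made up for illustration):

```nim
import tables

proc reverse[K, V](t: Table[K, V]): Table[V, K] =
  ## Builds a new table mapping each value back to its key.
  result = initTable[V, K]()
  for k, v in t:
    result[v] = k

when isMainModule:
  let ages = {"alice": 30, "bob": 40}.toTable
  echo reverse(ages)   # maps 30 -> "alice", 40 -> "bob"
```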
10:12:13Araqwe need a better hash table
10:13:46FromGitter<tim-st> Isn't the performance for lookup in O(1) ?
10:14:24FromGitter<tim-st> (time)
10:14:46Araqwell ok, we don't need a better hash table
10:14:52Araqwe need a better container
10:15:43FromGitter<tim-st> I like the python style dict where I can just write {...} and it's done :)
10:16:41Araq{...}.toOrderedTable -- is that so bad?
10:16:50FromGitter<tim-st> a bit, yes
10:16:56Araqlol
10:16:59FromGitter<data-man> @Araq: A better container for any type?
10:17:26FromGitter<tim-st> @Araq but it's quite good and better than golang and others
10:18:01Araqyou can shorten it to %{...} with a % template
10:18:37FromGitter<tim-st> Nice
10:19:08Araqbut idiomatic Nim doesn't use that many hash tables
10:19:13Araq;-)
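A minimal sketch of the `%` shortcut Araq mentions; the template name and exact shape are an assumption, not an existing stdlib API:

```nim
import tables

template `%`(pairs: untyped): untyped =
  ## Shorthand so that %{...} builds an ordered table.
  toOrderedTable(pairs)

let t = %{"a": 1, "b": 2}
echo t["b"]   # 2
```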
10:21:05FromGitter<mratsim> Insert obligatory "Araq: I would use a sqlite database here"
10:22:41FromGitter<tim-st> Sqlite is not constant, I like that Nim can make Table const, also sqlite needs a dll and my table size is like 100 entries only
10:24:28Araqmratsim, not what I am talking about
10:24:49FromGitter<tim-st> you meant case?
10:25:50AraqI'm talking about a data structure that keeps its data in seq[(key, value)] but also offers hash indices into the 'key' and 'value' spaces, depending on the number of elements and usage patterns
10:26:23Araqis that called a Bitable? no idea
10:27:14Araqideally I could declare any kind of indices like I can for database tables
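A minimal sketch of the bi-indexed container Araq describes: the data stays in a `seq[(K, V)]` and two hash indices point back into it. All names here (`BiTable`, `put`, `byKey`, `byValue`) are made up for illustration, and further indices could be added (or built lazily) the same way.

```nim
import tables

type
  BiTable[K, V] = object
    data: seq[(K, V)]          # the authoritative storage
    byKeyIdx: Table[K, int]    # key   -> position in data
    byValIdx: Table[V, int]    # value -> position in data

proc initBiTable[K, V](): BiTable[K, V] =
  result.data = @[]
  result.byKeyIdx = initTable[K, int]()
  result.byValIdx = initTable[V, int]()

proc put[K, V](t: var BiTable[K, V]; key: K; val: V) =
  t.data.add((key, val))
  t.byKeyIdx[key] = t.data.high
  t.byValIdx[val] = t.data.high

proc byKey[K, V](t: BiTable[K, V]; key: K): V = t.data[t.byKeyIdx[key]][1]
proc byValue[K, V](t: BiTable[K, V]; val: V): K = t.data[t.byValIdx[val]][0]

when isMainModule:
  var t = initBiTable[string, int]()
  t.put("one", 1)
  t.put("two", 2)
  assert t.byKey("two") == 2
  assert t.byValue(1) == "one"
```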
10:28:07*kier quit (Remote host closed the connection)
10:28:28FromGitter<mratsim> Ah, that would be useful; every time I used hashtables I also used a seq to keep track of the keys I had.
10:29:34*kier joined #nim
10:29:41Araqhuh? that's bad. what's wrong with the 'keys' iterator?
10:34:30FromGitter<data-man> Boost has multi_index_container. And this container ported to D.
10:39:04FromGitter<gogolxdong> ```code paste, see link``` [https://gitter.im/nim-lang/Nim?at=5a38ec48232e79134d6998fb]
10:40:59FromGitter<gogolxdong> ` ⏎ {.this: self.} ⏎ proc generateUrl(self :ptr QcloudApiCommonRequest,paramTable:Table,secretId,secretKey,requestMethod,requestHost,requestPath:string)= ⏎ ⏎ ```code paste, see link``` ... [https://gitter.im/nim-lang/Nim?at=5a38ecbbba39a53f1a7b0376]
10:42:30*mal`` quit (Quit: Leaving)
10:43:51FromGitter<gogolxdong> sorry, dumb me, I got a syntax hint: undeclared 'url' and 'expected else' at the beginning of the last line
10:52:12FromGitter<mratsim> @Araq I need a stack to pop from the end usually.
10:52:33*mal`` joined #nim
10:56:38Araqmratsim: give ordered tables a pop proc?
10:57:47FromGitter<mratsim> For graph of operations like Monte-Carlo Tree Search backpropagation and Deep Learning backpropagation, you keep parents/child relations (keys) in a Graph, the implementation/value in methods/closures/hashtables and the order in a stack (seq).
11:00:15FromGitter<mratsim> That's an idea, I've only implemented MonteCarlo Tree search in Rust so far ;) and for Deep Learning backpropagation I don't use hashtables.
11:12:46FromGitter<data-man> @mratsim: Have you tried https://github.com/Vladar4/FastStack?
11:17:58FromGitter<mratsim> @data-man interesting, I have no use case for it currently though. I either need a graph (neural net) or a stack of (ByteAddress, int, int) for object pool with (pointer address, data size, timestamp) but the stack is probably very small so normal seqs works fine.
11:34:22PMunchHmm Nimgame2 depends on SDL_Image 2.0.2?
11:34:29PMunchWhich is 8 weeks old..
11:35:04YardanicoPMunch, what's bad about that?
11:35:28FromGitter<alehander42> Araq: discardable is not honored when a call to such a function is the last in a non-void procedure
11:35:31FromGitter<alehander42> is that by design?
11:35:50PMunchYardanico, I don't have it in my package manager :(
11:35:58YardanicoPMunch, try to install latest one
11:36:05Yardanicoit will work :)
11:36:10Yardanicoif you have 2.0 in your package manager
11:37:08PMunchHmm, seems latest I can get is 2.0.1
11:37:28Araqalehander42: yeah, discardable is more like 'void' than a type in this context. we had problems with macro's add otherwise
11:37:53YardanicoPMunch, well I think it will work
11:38:05YardanicoPMunch, and where did you actually see that it depends on 2.0.2?
11:38:06PMunchNope, it's missing some SVG features
11:38:23YardanicoPMunch, did you try ? :)
11:38:26PMunchI'm getting an error: could not import: IMG_isSVG
11:38:34PMunchWhich was added in 2.0.2
11:39:01YardanicoPMunch, well you can compile it manually
11:39:31FromGitter<alehander42> @Araq ok, then I'll add a little fix for `asyncjs` to make Future[void] more robust in those cases
11:39:36PMunchOf course
11:41:44YardanicoPMunch, it's strange that newest sdl image is not in your package manager :(
11:42:10Yardanicowell not really strange
11:42:34PMunchIt is a bit strange. This is supposed to be a rolling release distro
11:42:41PMunchOpenSUSE Tumbleweed
11:42:54Yardanicoah, IDK then
11:44:10PMunchHuh, even Arch doesn't have 2.0.2: https://www.archlinux.org/packages/?sort=&q=sdl2_image&maintainer=&flagged=
11:44:44Yardanicobut it's 2.0.1-2
11:45:14Yardanicowell IDK
11:45:28YardanicoI don't really think that nimgame2 requires sdl image 2.0.2
11:45:43Yardanicoprobably https://github.com/Vladar4/sdl2_nim requires it
11:45:56Yardanicobut nimgame2 doesn't use it afaik
11:46:02Yardanico(I mean svg features)
11:47:18PMunchHmm yeah, that's fair
11:47:33PMunchBut by extension nimgame does depend on it
11:49:04Yardanicoyou can manually edit sdl2_nim package
11:49:13Yardanicojust remove everything that was added there: https://github.com/Vladar4/sdl2_nim/commit/678f44794e5a920a08ca19db41059c0ccbe72c09#diff-e4bd93e1e4a6cbc171db79eae1fc9a0a
11:49:45*arecaceae quit (Remote host closed the connection)
11:50:32*arecaceae joined #nim
11:51:33PMunch--deadCodeElim:on solved it :)
11:58:40PMunchHmm, the various demos keep crashing..
12:18:29FromGitter<mratsim> @Araq I can trigger reliably this error when allocating/computing tensors (sequences) of 100000 * 200 elements in a loop. When the process reach 2GB of RAM used I get Attempt to read from nil: ⏎ ⏎ ```code paste, see link``` ⏎ ⏎ Is 2GB a trigger for garbage collection? [https://gitter.im/nim-lang/Nim?at=5a3903955355812e57f32419]
12:20:50Yardanicobtw, I've installed firefox nightly to see how it compares with google chrome (I've used chrome for a long time)
12:21:17FromGitter<mratsim> @Araq trying on 0.17.2 I don’t have the same crash, so time to git bisect I guess
12:21:27Araq2GB is the maximal allocatable block
12:21:46Araqfor TLSF, probably should have added a runtime check
12:22:04Araqmaybe I can make it work for sizes bigger than that
12:23:26Araqwhat's there to bisect? we got a different allocator
12:25:24FromGitter<mratsim> ah I see.
12:26:59Araqbut are your chunks really 2 GB in size?
12:28:04FromGitter<mratsim> Well in my case, each tensor is about 180MB and could be released after a single computation, but I see the memory grow to 2GB (after 12~13 loops) before failing while trying to free memory.
12:28:14*Snircle joined #nim
12:29:48FromGitter<mratsim> ```code paste, see link``` ⏎ ⏎ 100000 * 200 * 8 bytes (floats) = 160MB + a couple metadata [https://gitter.im/nim-lang/Nim?at=5a39063b540c78242de222d6]
12:30:28FromGitter<mratsim> 160 * 13 = 2080 aka 2.08 GB
12:40:27*Sentreen joined #nim
12:44:37Arrrrwhat do you think about prolog?
12:47:24FromGitter<alehander42> I prefer it as a library in a "normal" language
12:47:38FromGitter<alehander42> as core.logic (clojure)
12:52:25Araqthe limit is 2GB for individual allocations
12:52:35Araqnot as the final heap size
12:52:46Araqreport it properly please, it shouldn't crash
12:54:46FromGitter<mratsim> I’m trying to reproduce with seq or shallow seq, but it’s giving me a hard time :/
12:59:03FromGitter<mratsim> will try again in the evening or tomorrow
13:13:21FromGitter<Varriount> Araq: So no 4GB sequences?
13:14:48*xkapastel quit (Quit: Connection closed for inactivity)
13:15:41*yglukhov quit (Remote host closed the connection)
13:16:41*SenasOzys__ quit (Read error: Connection reset by peer)
13:16:54*SenasOzys__ joined #nim
13:26:06*Arrrr quit (Read error: Connection reset by peer)
13:39:56FromGitter<RedBeard0531> @mratsim Our usage of COW is only for flat buffers of serialized data that you can append to when refcount=1, and for a deserialization of BSON (like JSON) into a mutable tree/DAG. One of the fun things about refcount>1 COW is that it allows a tree to become a DAG (a.c = a.b can share a.b's data) but prevents the DAG from degrading to a graph with cycles (a.b = a COWs a first then assigns the old to the new).
14:12:33*BitPuffin|osx joined #nim
14:14:03*yglukhov joined #nim
14:31:04*yglukhov quit (Remote host closed the connection)
14:39:28*couven92 joined #nim
14:41:18*yglukhov joined #nim
14:42:09couven92Araq et al., for PR #6301 what names do you suggest for a new module that contains editDistance? Or maybe put it into algorithms?
14:46:41Araqeditdistance.nim
14:54:07*ieatnerds joined #nim
14:54:42*ieatnerds quit (Client Quit)
15:00:21subsetparklike levenshtein distance?
15:01:08couven92subsetpark, yup
15:01:22subsetparkIsn't that a bit esoteric for a stdlib?
15:01:41couven92well, it has always been there
15:01:44FromGitter<tim-st> @Araq maybe there is a small chance for nimlang here: https://pineapplefund.org ? Maybe it's worth a try, since a programming language is the base for many important things... Read this a few days ago and remembered now.
15:01:54couven92I just converted it to make it unicode aware
15:02:07couven92and Araq wanted it out of unicode, since it's so rarely used
15:02:22Yardanicotim-st: yeah I shared this link here too
15:02:35Yardanico@Araq you can fill a quick form on this website
15:02:42Yardanicohttps://formlets.com/forms/ikppeNKjuxeHzzba/
15:02:57FromGitter<tim-st> @Yardanico Ok, good. I'm pretty sure, nimlang is a candidate...
15:03:14FromGitter<tim-st> although not a real charity.
15:03:47FromGitter<RedBeard0531> @couven92 sorry to be "that guy" but does it compute the edit distance in terms of codepoints or graphemes?
15:04:22FromGitter<RedBeard0531> I've only used it with ascii inputs, but every time I hear unicode that bell rings in my head now :)
15:04:36couven92@RedBeard0531 good question, it uses code-point distance
15:04:57couven92i.e. Age -> Åge has a distance of 1
15:05:16couven92(for ASCII that would be 2)
15:06:13FromGitter<RedBeard0531> I guess normalizing helps the decomposed A + ring-above case
15:07:03couven92oh, right... yeah, I am actually not sure how that goes... Hmm, let me test
15:07:53FromGitter<RedBeard0531> I know there are some graphemes that can't be composed into a single codeunit.
15:11:38*MJCaley joined #nim
15:14:16couven92Okay, so edit A + ring above to A has an editDistance of 2... Hmmm
15:15:30couven92so basically we need a much better Unicode-normalizer
15:15:37FromGitter<RedBeard0531> Sorry, didn't mean to create more work for you. Asking about graphemes is basically a reflex to me at this point...
15:16:26couven92@RedBeard0531, hehe! :) It's fine... At least I'm not getting bored over Christmas :P
15:16:40FromGitter<RedBeard0531> @couven92 See https://nim-lang.org/docs/unicode.html#graphemeLen,string,Natural
15:19:14FromGitter<RedBeard0531> It may be worth reading https://manishearth.github.io/blog/2017/01/14/stop-ascribing-meaning-to-unicode-code-points/
15:19:49FromGitter<data-man> IMO, performing normalization in editDistance is wrong.
15:20:12couven92@data-man: ?
15:22:27FromGitter<data-man> "Age -> Åge has a distance of 1" is correct
15:22:47subsetpark data-man: Normalization is required for that sentence to be true
15:22:50FromGitter<RedBeard0531> Crazy idea: could you make editDistanceT (a, b: openArray[T])? Then if the user wants it in bytes, they can pass strings, if they want codepoints, they can pass seq[Rune], and if they want graphemes, they can pass seq[String].
15:23:18FromGitter<mratsim> Or use a concept ! ;)
15:23:34FromGitter<RedBeard0531> that was supposed to say `editDistance[T](a, b: openArray[T]`
15:23:34FromGitter<mratsim> Or a typeclasse maybe.
15:23:54FromGitter<data-man> And editDistance go to...sequtils lol
15:24:39FromGitter<mratsim> Huh, that would be very strange.
15:24:42FromGitter<RedBeard0531> IIRC the algorithm also works on seq[int]. I'm sure someone will come up with a fun use-case for that :)
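A minimal sketch of that generic idea: a plain Levenshtein distance over any `openArray[T]` that has `==`, so the caller picks the unit (bytes, Runes, grapheme substrings, or ints). `editDist` is a made-up name, not the stdlib `editDistance`:

```nim
proc editDist[T](a, b: openArray[T]): int =
  ## Levenshtein distance with a single rolling row of the DP table.
  var dist = newSeq[int](b.len + 1)
  for j in 0 .. b.len: dist[j] = j
  for i in 1 .. a.len:
    var prev = dist[0]          # dist[i-1][j-1]
    dist[0] = i
    for j in 1 .. b.len:
      let oldVal = dist[j]      # dist[i-1][j]
      if a[i-1] == b[j-1]:
        dist[j] = prev
      else:
        dist[j] = 1 + min(prev, min(oldVal, dist[j-1]))
      prev = oldVal
  result = dist[b.len]

when isMainModule:
  assert editDist("kitten", "sitting") == 3
  assert editDist(@[1, 2, 3], @[1, 3]) == 1
```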
15:26:14FromGitter<data-man> @subsetpark: ⏎ ⏎ > edit distance is a way of quantifying how dissimilar two strings (e.g., words) are to one another by counting the minimum number of operations required to transform one string into the other.
15:26:22FromGitter<data-man> From wiki
15:26:35subsetparkyes, i know :)
15:27:33couven92Question is: Should `editDistance("Å", "A\xCC\x8A") == 0`?
15:27:41FromGitter<mratsim> @RedBeard0531 I see a use case to validate typo in number fields in forms.
15:28:03subsetparkbut the reason unicode normalization was raised is that Å can be represented as a single unicode character or as two (an A with a combining character) - so if the unicode is not normalized, then levenshtein(Å) will be either 1 or 2 depending on the specific realization of that same grapheme
15:29:00FromGitter<RedBeard0531> Now I'm questioning whether my use should be in terms of bytes, Runes, or graphemes. I'm using it for "did you mean?" for build targets which are file paths entered by a user. That means that they don't *have to* be utf-8, but it also means that a user is pressing keys on a keyboard...
15:30:09FromGitter<data-man> For validate typo the Damerau–Levenshtein distance is more suitable
15:30:26couven92yes exactly... In my opinion, we should normalize... But I could be talked into having a bool flag `noNormalize` if you really want to
15:30:42couven92(which would be `false` by default)
15:31:10FromGitter<mratsim> Sounds reasonable.
15:31:22FromGitter<RedBeard0531> what will it do on non-utf-8 inputs when normalize is off?
15:31:28subsetparksurely it would be `normalize`, which is `true` by default?
15:31:55FromGitter<RedBeard0531> or is that a "don't do that!" kind of thing?
15:32:02couven92@RedBeard0531 you'd have a editDistanceBytes for such cases
15:32:32couven92And you could use the current ASCII-only variant in strutils for that
15:40:49FromGitter<data-man> Anyone uses the colors module? ⏎ I converted the list of colors from the Wiki to Nim constants. There are more than 1500 color names. 
15:44:39yglukhovAraq, dom96: always wanted but was afraid to ask. why {.async.} macro requires return type to be Future[T] instead of transforming it automatically?
15:45:04FromGitter<mratsim> Only? I remember about "Bordeaux, Vermillion, ..." Just for red ...
15:46:22FromGitter<zacharycarter> I don't use the color module, I haven't found it very useful
15:47:11FromGitter<zacharycarter> IMO it'd be more useful if it had convenience procs for converting between RGB colorspace, hex color space and floating point color space
15:47:16*Vladar quit (Quit: Leaving)
15:47:21FromGitter<zacharycarter> I know there is some of that already in the module, but not enough to make it worth using / easy to use
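A small sketch of the kind of convenience being asked for here, built on the stdlib colors module's `extractRGB`; `toFloats` is a made-up name:

```nim
import colors

proc toFloats(c: Color): tuple[r, g, b: float] =
  ## Packed Color -> normalized float components.
  let (r, g, b) = extractRGB(c)
  (r.float / 255.0, g.float / 255.0, b.float / 255.0)

echo toFloats(colRed)   # (r: 1.0, g: 0.0, b: 0.0)
```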
15:48:37*couven92 quit (Quit: Client disconnecting)
15:49:45FromGitter<alehander42> @yglukhov just giving an example: in my private project's initial async impl for js, I was converting the return type automatically to `Future[T]`, but it seemed somehow .. too magical, and asymmetric. e.g. in forward definitions, I had to use `Future[T]` anyway
15:50:46yglukhovalehander42: but you can apply {.async.} pragma to fwd decls, that would feel more natural
15:51:35yglukhovbecause pragmas usually have to be applied to fwd decls anyway
15:52:22*noonien joined #nim
15:54:49FromGitter<zacharycarter> What do you guys think about putting up some type of section in the FAQ that addresses common criticisms / complaints about Nim - basically provides information about how the language is being improved based on feedback
15:55:19FromGitter<zacharycarter> I still run into ex-Nimmers who spread FUD about the language, I ran into one ex user this morning claiming Nim was a failed programming language :/
15:55:42FromGitter<alehander42> yes, it's possible. otherwise the only other thing I can think of is e.g. futures within futures would be extremely confusing (e.g. return `Future[int] {.async.}` ) but I haven't had this usecase until now
15:56:01FromGitter<alehander42> but I would be interested in the answer too
15:57:38FromGitter<RedBeard0531> `Future[Future[Int]]` should just be collapsed to `Future[int]`. AFAICT there is never a reason to have nested futures outside of the implementation of collapsing.
15:59:40FromGitter<andreaferretti> Please, never collapse futures implicitly!
15:59:51FromGitter<andreaferretti> This just breaks generic programming
16:00:23FromGitter<andreaferretti> Say you have a function `A => B`. Normally you can lift it to a function `Future[A] => Future[B]`
16:00:27FromGitter<alehander42> I think there are possible usecases, I don't agree they are equivalent
16:00:45FromGitter<andreaferretti> but if `B` happens to be `Future[C]` this now breaks
16:01:27FromGitter<andreaferretti> I think there was a discussion exactly about collapsing options implicitly, basically for the same reaons
16:01:45FromGitter<andreaferretti> well, about *not* collapsing options implicitly
16:02:29FromGitter<andreaferretti> futures, lists, options.... are all monads and they have some compositional behaviour that is broken as soon as someone makes some implicit collapsing
16:02:38FromGitter<andreaferretti> as it happens -say - in javascript futures
16:03:56FromGitter<RedBeard0531> But if `Future[Future[B]]` and `Future[B]` are the *same type*, it may be fine (not sure if it is possible in nim, I've thought about this much more in the context of C++'s type system, although I'm still exploring the design space)
16:04:52Yardanicozacharycarter: https://github.com/nim-lang/Nim/wiki/Common-Criticisms ?
16:05:42FromGitter<zacharycarter> @Yardanico I guess maybe it just needs some updating?
16:05:49Yardanicoprobably
16:06:41FromGitter<RedBeard0531> Lift is an interesting case because I *think* you'd want A -> Future[B] lifted to Future[A] -> Future[B]
16:07:42FromGitter<RedBeard0531> Then you can lift without caring if the function returns B or Future[B], since the net effect is the same
16:08:04*nsf quit (Quit: WeeChat 1.9.1)
16:11:09yglukhovI'd propose the following logic for async macro: if return type is Future[T], keep it intact. otherwise, transform it to Future[RetType]. This way, defining Future[Future[T]] is still possible
16:11:31yglukhovand non-future return types (which make no sense anyway) would be properly handled
16:15:28FromGitter<alehander42> I agree with @andreaferretti
16:15:55FromGitter<alehander42> @RedBeard0531 Future[A] and Future[Future[A]] are not the same type, why would they be?
16:19:32FromGitter<andreaferretti> @RedBeard0531 `A -> Future[B]` to `Future[A] -> Future[B]` is usually called `flatMap`
16:19:43FromGitter<andreaferretti> and it is the composition of `map` and `flatten`
16:20:11yglukhovmy suggestion doesn't interfere with andreaferretti's point, or am I missing something...
16:20:55FromGitter<alehander42> @RedBeard0531 I can have a list of handlers which are Future[A] themselves and return such a handler based on another await from a function with type `Future[Future[A]]`
16:21:06FromGitter<andreaferretti> @RedBeard0531 the problem with your approach is *exactly* that you happen to do different things according to whether `B` is a future
16:21:14FromGitter<andreaferretti> which makes metaprogramming inconsistent
16:21:24FromGitter<andreaferretti> now you have to handle special cases each time you touch futures
16:21:55FromGitter<alehander42> @yglukhov I agree with him about the discussion about collapsing, otherwise your idea sounds reasonable
16:22:25FromGitter<andreaferretti> @yglukhov I *think* one can make the async macro handle the two cases
16:22:33FromGitter<dustinlacewell> I see we’re discussing monads
16:22:38FromGitter<andreaferretti> but I would have to see an implementation to tell
16:22:56FromGitter<andreaferretti> @dustinlacewell yup
16:23:14FromGitter<mratsim> Please don’t do implicit stuff >_>, there are elegant ways to compose/lift Monads without breakage in other code due to implicit conversion
16:23:15FromGitter<dustinlacewell> I have been studying monads for a few weeks now.
16:23:49FromGitter<dustinlacewell> I bet Nim could have sweet support for monadic infix operators
16:23:57FromGitter<dustinlacewell> (or does)
16:24:16FromGitter<mratsim> Or you can create a `converter (a: Future[Future[T]]): Future[T] = … ` that will do the implicit conversion in your code
16:24:43*Arrrr joined #nim
16:24:43*Arrrr quit (Changing host)
16:24:43*Arrrr joined #nim
16:24:43FromGitter<mratsim> @dustinlacewell Arraymancer Tensors are monadic
16:25:24FromGitter<dustinlacewell> I just meant like >>= and <$> and <*> and stuff
16:25:31FromGitter<mratsim> ```code paste, see link``` [https://gitter.im/nim-lang/Nim?at=5a393d7a232e79134d6b8ce5]
16:25:51FromGitter<mratsim> Ah right, there is a RFC about how to name “bind” vs `>>=`
16:26:12FromGitter<dustinlacewell> An RFC?
16:26:32FromGitter<dustinlacewell> Seems like with operator overloading you don’t need language support
16:26:43FromGitter<dustinlacewell> So name it whatever you want :)
16:27:10FromGitter<dustinlacewell> Do any of you use an ML?
16:27:17*PMunch quit (Quit: Leaving)
16:27:46FromGitter<mratsim> @dustinlacewell see: https://github.com/nim-lang/Nim/pull/6404
16:28:02FromGitter<dustinlacewell> Also, with Nim you could probably write a macro much like haskell `do` or F# computation expressions
16:28:42FromGitter<mratsim> I used to use Haskell, it was actually the first language I learned seriously (besides scripting stuff in bash I mean)
16:29:05*Arrrr quit (Ping timeout: 240 seconds)
16:29:38FromGitter<dustinlacewell> lmfao at “flatMap"
16:30:10FromGitter<dustinlacewell> smh
16:30:37FromGitter<mratsim> and you can already custom define `>>=` `<$>` `<*>` but due to the arrows and the = it might have strange operator precedence
16:30:58FromGitter<dustinlacewell> “nim is not haskell” because >>= comes from haskell
16:31:01FromGitter<dustinlacewell> lol
16:31:07*kalkin- joined #nim
16:31:12kalkin-hi
16:31:32FromGitter<zacharycarter> hello
16:31:34kalkin-How do I call a function by it's string name
16:32:09kalkin-something like "foo"()
16:32:17FromGitter<mratsim> use a table that maps string -> function? or use a template/macro
16:32:44FromGitter<andreaferretti> do you know the string name at compile time or is it generated at runtime?
16:33:00FromGitter<andreaferretti> maybe it would be easier if you zoom out a bit and explain the context
16:33:12FromGitter<andreaferretti> where you would use this
16:33:42kalkin-andreaferreti: Ok I will explain a context
16:35:07kalkin-I have a macro which receives a string and constructs an enum type from values in the string. For each enum I want to create a "prototype" procedure named formatEnumChildName, used when formatChildName is not defined
16:35:48FromGitter<dustinlacewell> >>= comes from ML, https://gist.github.com/dustinlacewell-wk/ec470596425b1a559e6815ac9bffe74a#file-2-1-lock-and-acquire-fsx-L61
16:36:00FromGitter<mratsim> do you actually need to pass a string? you can use identifier construction
16:36:03FromGitter<dustinlacewell> and makes monad code extremely concise and expressive (assuming you know what monads are)
16:36:18FromGitter<mratsim> https://nim-lang.org/docs/manual.html#templates-identifier-construction
16:37:10*Yardanico_ joined #nim
16:37:16FromGitter<dustinlacewell> That’s an implementation of a single-item buffer with two binary semaphores and wait counter.
16:37:34*Trustable joined #nim
16:37:43FromGitter<dustinlacewell> In F# with the concurrent-ml lib Hopac. imagine “flatMap” everywhere
16:37:59kalkin-I've got the macro working, the only thing I'm missing is the check for if a function named "foo" is defined
16:38:57kalkin-dustinlacewell: thanks will try
16:39:27FromGitter<dustinlacewell> Kalkin, sorry I wasn’t address you, just rambling from the previous discussion.
16:40:04kalkin-dustinlacewell: np I see. I just wanted to ask how do I pass the string if the identifier construction needs an untyped value
16:40:06kalkin-:)
16:41:05*Yardanico quit (Ping timeout: 248 seconds)
16:41:23kalkin-ohh sorry for mixup I wanted to address mratsim. I hadn't enough sleep this night
16:41:38FromGitter<dustinlacewell> I know that feeling. :)
16:42:30FromGitter<mratsim> @kalkin, I don’t know but I would look at “ident” there might be something to transform string to identifiers
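A small sketch of mratsim's first suggestion (a runtime string-to-proc table) plus the compile-time `declared` check kalkin asked about; the proc names and the `Handler` type are made up, and `formatChildName` stands in for kalkin's hypothetical user-supplied proc:

```nim
import tables

type Handler = proc () {.nimcall.}

proc greet() = echo "hello"
proc farewell() = echo "bye"

var dispatch = initTable[string, Handler]()
dispatch["greet"] = greet
dispatch["farewell"] = farewell
dispatch["greet"]()          # call a proc by its string name

# Compile-time check: is a proc with this name already defined?
when declared(formatChildName):
  formatChildName()
else:
  echo "formatChildName not defined, falling back"
```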
16:43:44FromGitter<dustinlacewell> Anyway regarding “nim is not haskell” there are reasons why infix operators are used. Imagine `add <!> (Some 2) <*> (Some 3)` see the resemblence to `add 2 3`?
16:45:55FromGitter<dustinlacewell> `Option.apply (Option.map add (Some 2)) (Some 3)` is not quite as nice
16:46:55*Vladar joined #nim
16:47:39*Zardos joined #nim
16:49:11ZardosToday I'm feeling totally silly. Can someone help me? Where is the link/button on the nim-forum to open a new topic?
16:49:33FromGitter<zacharycarter> Zardos: are you logged in?
16:49:48ZardosAhh!
16:50:01FromGitter<zacharycarter> once you are - https://forum.nim-lang.org/newthread
16:50:10ZardosProbably not! I confirmed the email and thought I was logged in...
16:50:33ZardosThanks!
16:53:30FromGitter<RedBeard0531> @andreaferretti I guess I see it the other way. I see the generic cases that are made easier by collapsing, such as `proc addHandlers[A,B](handlers: varargs[A->Future[B], lift])` which can take either `A->B` or `A -> Future[B]` and treats them the same. Generic code can just use `await ensureFuture(futureOrNot)` and that will work regardless of whether futureOrNot is a Future[T] or a T
16:54:24FromGitter<mratsim> you can put a `when B is Future` in your proc though
16:54:28FromGitter<alehander42> is there any flag for the compiler to show error traces or errors with the lines as context ?
16:54:54FromGitter<mratsim> @alehander42 you mean linedirs interleaved with stacktraces?
16:55:02FromGitter<zacharycarter> @alehander42 - https://nim-lang.org/docs/nimc.html#additional-features-linedir-option
16:55:56FromGitter<mratsim> probably `--embedsrc`
16:55:57FromGitter<RedBeard0531> I'm also unsure if there is a meaningful semantic difference between Future[T] and Future[Future[T]]. Both seem to grant you the ability to wait for a T as some future time. Is there anything useful you can do with a F[F[T]] that you can't do with F[T]? (Generally curious, as I said, I'm still exploring the design space)
16:56:43FromGitter<alehander42> @zacharycarter I know about linedir, I mean the error output of the compiler
16:56:47FromGitter<alehander42> eg
16:56:49FromGitter<mratsim> `--embedsrc`: embeds the original source code as comments in the generated output
16:56:50FromGitter<alehander42> ```code paste, see link``` [https://gitter.im/nim-lang/Nim?at=5a3944d10163b0281056eac2]
16:57:00FromGitter<alehander42> ahh nice I'll take a look @mratsim
16:57:17FromGitter<RedBeard0531> Isn't ^4 an index error there?
16:57:21FromGitter<alehander42> ah, no, I know that option too
16:57:44FromGitter<alehander42> maybe, I was demonstrating what I want to see from the output :D not the error
16:58:33FromGitter<alehander42> (obviously it is, It was another string, that's what happens when I try to tweak my examples haha)
16:59:58FromGitter<alehander42> @RedBeard0531 you can await a function that, depending on its arg, returns another Future, which you can then await in a different place in your function
17:01:03*Arrrr joined #nim
17:01:47*floppydh quit (Quit: WeeChat 2.0)
17:03:17FromGitter<RedBeard0531> @alehander42 But can't you already do that with a Future[T]?
17:04:30FromGitter<alehander42> yes, but this "first function" might wait for a query to a server and, based on the result, return a different future
17:04:36FromGitter<alehander42> so its type would be Future[Future[A]]
17:05:10FromGitter<RedBeard0531> Sure, but if it just collapsed to F[A], what would that prevent you from doing?
17:05:14Araqflattening types never works IME.
17:05:48FromGitter<alehander42> you wouldn't be able to await it
17:05:55FromGitter<RedBeard0531> It still seems like you can't really do anything with the inner Future[A] other than wait for it
17:06:03Araqwhen I await an F[F[A]] I get F[A] if I await that I get A
17:06:18FromGitter<alehander42> exactly
17:06:40Araqoption[option[T]] is not the same as option[T] either
17:06:55FromGitter<alehander42> yes but you can wait for it after two lines, not immediately
17:07:12FromGitter<alehander42> which might make sense for some cases
17:08:26FromGitter<RedBeard0531> sure, but what is the point of awaiting to get an F[A], since you can't do anything with it. Especially since you can convert any F[F[T]] to an F[T] without waiting at all. If anything, it seems like awaiting an F[F[A]] represents a bad idea
17:09:09FromGitter<RedBeard0531> Agree about optional[optional[A]] btw. Future[Future[A]] just feels *different* for some reason
17:09:35FromGitter<alehander42> @Araq on verbosity:2 the line of error is shown with `^` under the column, would it be hard to generalize that for multiline errors? (e.g. instantiation from here) and to make an additional flag for it(`--showErrorContext`) ?
17:09:53FromGitter<alehander42> I can do it even with some external script I guess
17:11:02FromGitter<RedBeard0531> @alehander42 What are the actual cases where that is useful? I don't think the inner future will wait for you to await the outer future to start running.
17:11:02FromGitter<alehander42> @RedBeard0531 you can't convert `Future[Future[T]]` to `Future[T]` without waiting?
17:12:06FromGitter<alehander42> the inner future wouldn't even start running before it's returned from the outer future
17:12:24Araqalehander42: there is already a switch for that but just use an IDE
17:15:12FromGitter<alehander42> what's the switch ?
17:15:28FromGitter<RedBeard0531> it won't start running until it is returned by the inner func, but I think that will happen even if no-one is awaiting either Future (although I'm somewhat unclear on this detail and I think it depends on the details of the specific event loop you are using)
17:21:30FromGitter<dustinlacewell> `*<Araq>* when I await an F[F[A]] I get F[A] if I await that I get A` +1
17:23:21FromGitter<dustinlacewell> @RedBeard0531 When you have a P[P[A]], the inner Promise doesn’t even exist. The value is a promise to produce a promise of A
17:23:43FromGitter<dustinlacewell> Imagine if instead of promise’s, you just had procs
17:24:19FromGitter<dustinlacewell> So instead of P you have F['a] such that F = () -> 'a
17:24:32FromGitter<dustinlacewell> the 'a doesn’t exist until returned by the function
17:24:48FromGitter<dustinlacewell> So if you have an F[F[int]], for example, the inner function doesn’t even exist
17:25:05FromGitter<dustinlacewell> You can’t “collapse” it, you have to call the outer proc, to produce the inner proc.
17:25:41FromGitter<dustinlacewell> The outer promise needs to be waited on. Maybe it fetches a remote resource and uses the result to produce a new promise.
17:26:37FromGitter<dustinlacewell> Also possible I have no idea what you’re really talking about and I missed the point, if so sorry.
17:27:17FromGitter<RedBeard0531> @alehander42 this unwraps F[F[T]] to F[T] without waiting: https://gist.github.com/RedBeard0531/7e0895fbe73a0d13fb4e74ac854e6ce1
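A hedged sketch of that callback-only unwrap, along the lines of the gist: it turns a `Future[Future[T]]` into a `Future[T]` without awaiting anything. `unwrap` is a made-up name, and this assumes asyncdispatch's single-callback `callback=` setter:

```nim
import asyncdispatch

proc unwrap[T](outer: Future[Future[T]]): Future[T] =
  ## Completes the returned future when the inner future of `outer` does,
  ## purely via completion callbacks (no await).
  var ret = newFuture[T]("unwrap")
  outer.callback = proc () =
    if outer.failed:
      ret.fail(outer.readError)
    else:
      let inner = outer.read
      inner.callback = proc () =
        if inner.failed: ret.fail(inner.readError)
        else: ret.complete(inner.read)
  ret
```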
17:28:18FromGitter<dustinlacewell> Unwrap is spelled “return"
17:28:25FromGitter<RedBeard0531> not in nim :)
17:28:41FromGitter<dustinlacewell> In the general computing science of monads
17:28:57FromGitter<RedBeard0531> Also, isn't return the other direction? T -> Monad[T]
17:29:08FromGitter<dustinlacewell> Which is spelled “bind”
17:29:16FromGitter<dustinlacewell> But
17:29:26FromGitter<dustinlacewell> “return” is a function of the signature M[T] -> T
17:29:39FromGitter<dustinlacewell> “bind” TAKES a function of the signature T -> M[T]
17:30:07FromGitter<dustinlacewell> return : M[T] -> T
17:30:33FromGitter<dustinlacewell> Bind: (T -> M[T]) M[T] -> M[T2]
17:31:37FromGitter<dustinlacewell> Bind “composes lifting functions"
17:31:58FromGitter<dustinlacewell> Lets say you have a bunch of mathematical operators that are `float -> float`
17:32:14FromGitter<dustinlacewell> But your division operator is `float -> Option<float>`
17:32:43FromGitter<dustinlacewell> If you have some formuala composing the operators, as soon as division is used, BOOM the whole thing explodes
17:32:58FromGitter<RedBeard0531> Unwrap is spelled "join" in haskell http://hackage.haskell.org/package/base-4.10.1.0/docs/Control-Monad.html#v:join
17:33:10FromGitter<dustinlacewell> The solution is to lift everything up into Option<float> space
17:33:41FromGitter<RedBeard0531> It also says that return is a -> m[a]: http://hackage.haskell.org/package/base-4.10.1.0/docs/Control-Monad.html#v:return
17:35:06FromGitter<alehander42> ok, you can write an unwrap which internally uses callbacks, but my point is, sometimes you might want to return an unwrapped future and wait for it in the well.. future
17:35:39FromGitter<dustinlacewell> @RedBeard0531 yeah you’re right got that backwards heh
17:36:17FromGitter<dustinlacewell> @alehander42 Yeah for controlled concurrency and stuff
17:37:47FromGitter<RedBeard0531> Can you give me a concrete example of where that is actually meaningful and useful?
17:37:54FromGitter<dustinlacewell> Yeah I was going to
17:38:02FromGitter<dustinlacewell> But you made me reflect on being fooled by return’s name
17:38:41FromGitter<dustinlacewell> But I was just thinking the way to remember properly is to remember that bind takes a function with return’s signature
17:38:44FromGitter<dustinlacewell> Anyway
17:38:51FromGitter<dustinlacewell> Back to the number example
17:39:01FromGitter<dustinlacewell> As soon as you do division, the computation explodes
17:39:21FromGitter<dustinlacewell> So the way you do this by using return, bind and map
17:39:43FromGitter<dustinlacewell> One way to think about map, like on lists, is that it takes a function and a list, and then applies the function to each element
17:39:50FromGitter<dustinlacewell> But there is another way to think about map
17:40:06FromGitter<dustinlacewell> I’ve forgotten Nim so forgive the ML syntax but
17:40:21FromGitter<dustinlacewell> List.map (fun x -> x * x) someList
17:40:32FromGitter<dustinlacewell> But if you think about partial application….
17:40:41FromGitter<dustinlacewell> List.map (fun x -> x * x)
17:40:59FromGitter<dustinlacewell> Then you can think of map as taking a function of 'a -> 'a and returning a function that is M['a] -> M['a]
17:41:20FromGitter<dustinlacewell> So now you can make all the rest of your numeric operators work with your division operator
17:41:34FromGitter<dustinlacewell> Since everything is working in Option<int> now
17:41:46*Arrrr quit (Ping timeout: 255 seconds)
17:41:51FromGitter<dustinlacewell> You can build map with return and bind
17:41:58FromGitter<dustinlacewell> if you think about it for a second or two
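A tiny Nim rendering of that lifting idea using the stdlib options module's `map`; `safeDiv` and `square` are made-up names for illustration:

```nim
import options

proc safeDiv(a, b: float): Option[float] =
  ## Division that returns None instead of exploding on zero.
  if b == 0.0: none(float) else: some(a / b)

proc square(x: float): float = x * x

echo safeDiv(10.0, 4.0).map(square)   # the Some(6.25) case
echo safeDiv(1.0, 0.0).map(square)    # stays None, nothing explodes
```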
17:42:10FromGitter<RedBeard0531> I think that logic applies to the generic Monad case, but not to Future[T], since not awaiting the outer future doesn't prevent the inner one from running
17:42:32FromGitter<dustinlacewell> There is no inner one
17:42:39FromGitter<dustinlacewell> In your unwrap, you’re waiting the outer one
17:43:58FromGitter<RedBeard0531> unwrap waits on both. addDoubleWrapped only waits on the inner ones (technically it waits on synthetic futures that themselves wait on the inner ones)
17:45:03FromGitter<dustinlacewell> Sure but I’m not sure what your point is.
17:45:07FromGitter<alehander42> well usually if you write such a function, you wouldn't "start waiting" for the inner one in it, guaranteeing that the caller can start waiting whenever he wants (and in all cases, *after* he received the completed outer future)
17:45:12FromGitter<dustinlacewell> I think you have one, but I am not getting it
17:45:14*Arrrr joined #nim
17:47:45FromGitter<RedBeard0531> Basically my point is that Future[Future[T]] doesn't have meaningfully different semantics from a Future[T] (This is specifically talking about Future, not generic Monads).
17:48:00FromGitter<dustinlacewell> Future is a monad
17:48:44FromGitter<dustinlacewell> I would say
17:49:08FromGitter<dustinlacewell> The semantics of automatically unwrapping Future[Future[T]] to T in your program has no problems
17:49:23FromGitter<dustinlacewell> Rather than saying its a general property of Future's
17:49:27FromGitter<alehander42> it has; you can await a list of other futures that you can selectively await at a later moment depending on conditions
17:49:32FromGitter<dustinlacewell> ^
17:49:54FromGitter<RedBeard0531> @alehander42 Are you basically saying that you can use the completion of the outer Future[T] as a sequence point to ensure a happens-before relationship from whatever completed it to some other operation in addition to the inner future?
17:50:23FromGitter<dustinlacewell> That’s just one thing
17:50:30FromGitter<alehander42> that might be a usecase, yes, I am just describing that it's completely possible to have semantic difference
17:50:44FromGitter<dustinlacewell> Maybe the costs associated with waiting on the inner promise is totally different than the outer promise
17:50:51FromGitter<alehander42> now, I didn't have that as a real world usecase, but I am sure that somebody somewhere had
17:50:56FromGitter<dustinlacewell> And you need to control their concurrency seperately
17:51:12FromGitter<RedBeard0531> Interesting... I'll have to ponder that a bit. Lunch time in my TZ.
17:52:30*MJCaley quit (Quit: MJCaley)
17:54:17FromGitter<RedBeard0531> @dustinlacewell You can't use awaiting or not to control concurrency (with most Future implementations) since as soon as you dispatch work to create the Future it can complete on its own even if no one is waiting on it. Otherwise `let (aFut, bFut) = (readA(), readB()); let (a, b) = (await aFut, await bFut);` wouldn't allow a and b to execute concurrently. Note that `let (a, b) = (await readA(), await readB())` is serial
17:54:17FromGitter... rather than parallel.
17:55:04FromGitter<dustinlacewell> @RedBeard0531 exactly!
17:55:10FromGitter<dustinlacewell> That’s why there is no inner promise
17:55:44FromGitter<dustinlacewell> So having a P[P[T]] and just saying that you can treat that value as a P[T] or even T, just because you have a function that waits on them both, doesn’t mean that’s what you should do
17:55:50FromGitter<dustinlacewell> Or even can do
17:56:07FromGitter<dustinlacewell> Maybe you have different service rate limits for the endpoint on the outer promise, and inner promise?
17:56:43FromGitter<dustinlacewell> Wait
17:56:44FromGitter<dustinlacewell> What
17:57:35FromGitter<dustinlacewell> Oh I have been studying Concurrent ML too long..
17:58:54FromGitter<alehander42> :D
18:01:07*foo_ joined #nim
18:01:48*Yardanico_ is now known as Yardanico
18:03:27FromGitter<alehander42> @Araq if there is a forward declaration without a codegenDecl and an impl with codegenDecl, the code generator still doesn't get the constraint, is that by design?
18:10:31*Arrrr quit (Ping timeout: 260 seconds)
18:19:02FromGitter<RedBeard0531> For comparison, C++ future<T> from the concurrency TS went for having future<future<T>> be a distinct type that implicitly converts to plain future<T> http://en.cppreference.com/w/cpp/experimental/future/future (note the lack of "explicit" on overload 4). Not that c++ is necessarily the best place to look for good design...
18:21:40FromGitter<RedBeard0531> (We are looking into building our own Future at work that integrates better with the rest of our system. Which is why I am so interested in these details)
18:34:59*claudiuinberlin quit (Quit: Textual IRC Client: www.textualapp.com)
18:44:21*Arrrr joined #nim
18:44:22*Arrrr quit (Changing host)
18:44:22*Arrrr joined #nim
18:48:08*nsf joined #nim
18:54:04*Ven`` joined #nim
18:54:35AraqRedBeard0531: I'm very interested in your makefile Nim stuff as a general Nim benchmark to optimize against
18:54:48Araqcan you share the input data?
18:55:50*Jesin joined #nim
19:00:00FromGitter<RedBeard0531> Sure. Right now it is just fed with **/*.d in my build dir, but I assume it'd be easier to consume in a single file?
19:01:17FromGitter<RedBeard0531> Also I went a bit crazy last night and used SSE4.2 pcmpistri to optimize the `while s[i] in plain_chars: inc i` hot loop
19:06:45FromGitter<RedBeard0531> https://gist.github.com/RedBeard0531/2ee8a18ba3fb354557f7f113ca39d732 It would be very cool if nim could do that automatically in parseWhile/parseUntil
19:10:01Araqindeed
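For reference, the scalar hot loop in question next to its `parseutils.skipWhile` equivalent, which is where such a SIMD fast path could live; `plainChars` is a made-up character set for illustration:

```nim
import parseutils

const plainChars = {'a'..'z', 'A'..'Z', '0'..'9', '_', '/', '.', '-'}

proc scanPlain(s: string; start: int): int =
  ## Counts how many "plain" chars follow `start` (the scalar hot loop).
  var i = start
  while i < s.len and s[i] in plainChars: inc i
  i - start

let line = "src/foo.o: src/foo.cpp include/bar.h"
assert scanPlain(line, 0) == skipWhile(line, plainChars, 0)
```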
19:15:41*NimBot joined #nim
19:21:06FromGitter<RedBeard0531> I had forgotten that SIDD_CMP_RANGES exists. pcmpXstrX is really a swiss-army knife of byte/short processing. And proof that intel considers logic gates to be basically free.
19:22:11*claudiuinberlin joined #nim
19:25:48*jjido joined #nim
19:34:38*Elronnd quit (Ping timeout: 276 seconds)
19:43:26FromGitter<RedBeard0531> @Araq It's a total of 83 MB of data, but because it is *extremely* redundant, it is just 320 KB as a tar.xz. How do you want me to send it to you?
19:45:41*jjido quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
19:50:26*xkapastel joined #nim
20:02:25*Vladar quit (Quit: Leaving)
20:07:16*yglukhov quit (Remote host closed the connection)
20:11:50*Jesin quit (Quit: Leaving)
20:12:03*yglukhov joined #nim
20:16:31*yglukhov quit (Ping timeout: 260 seconds)
20:19:23Araqhuh ...
20:19:37AraqI don't want to add 83 MB to the repo
20:19:46FromGitter<RedBeard0531> yeah, didn't think so :)
20:20:11Araqcan you extract something representative out of this?
20:22:43FromGitter<RedBeard0531> what do you mean? This is the output of g++ <other flags> -MMD when running our build system. Each .d file lists every file included by the corresponding .cpp file. The build system needs to read this data to know when a .o should be rebuilt if a header was edited.
20:23:15*BitPuffin|osx quit (Ping timeout: 256 seconds)
20:23:45*Ven`` quit (Read error: Connection reset by peer)
20:24:49*Ven`` joined #nim
20:24:56*Arrrr quit (Ping timeout: 272 seconds)
20:25:16FromGitter<RedBeard0531> See https://ninja-build.org/manual.html#ref_headers. I'm writing a ninja-compatible build system in Nim for "fun".
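Not the code from the gists above, just a minimal sketch of what consuming one gcc -MMD style .d file involves (escaped spaces and other corner cases ignored):

```nim
import strutils

proc parseDepFile(path: string): tuple[target: string, deps: seq[string]] =
  # A .d file looks like "obj.o: a.cpp a.h b.h \" with backslash-newline
  # continuations; join the continuations, then split on the first colon.
  var text = readFile(path).replace("\\\n", " ")
  let colon = text.find(':')
  result.target = text[0 ..< colon].strip()
  for dep in text[colon + 1 .. ^1].splitWhitespace():
    result.deps.add dep
```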
20:26:49*Jesin joined #nim
20:27:00*zahary_ joined #nim
20:27:13*Arrrr joined #nim
20:27:13*Arrrr quit (Changing host)
20:27:14*Arrrr joined #nim
20:28:14Araqthe program only processes the .d files, right?
20:29:33FromGitter<RedBeard0531> The benchmark I posted yesterday does. That is currently my proof of concept to see how well it works. I haven't integrated it into the build system yet.
20:30:08Araqyeah so just select a couple of .d files you consider representative
20:30:32Araqor just N files so that they take up roughly 1 MB, dunno
20:40:52FromGitter<RedBeard0531> These are the biggest and smallest files: https://gist.github.com/RedBeard0531/03e0d6e9c85b76ab112bc585c5603bd7. You can probably get a reasonable benchmark by just feeding it either/both repeatedly.
20:45:06*jjido joined #nim
20:45:43FromGitter<RedBeard0531> btw, it quickly ended up bottlenecked on the byte-at-a-time hash(string) function. Are there plans to switch to something faster like xxhash or murmur? I saw https://github.com/nim-lang/Nim/issues/6136, but it looks like it stalled out (to an outside observer)
20:46:12Araqdefinitely something we will do
20:46:30Araqit needs to be evaluable at compile time though
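A minimal sketch of that constraint, using FNV-1a as an arbitrary stand-in for xxhash/murmur rather than a stdlib proposal: whatever replaces the byte-at-a-time hash still has to run in the VM, e.g. inside static blocks or constant tables.

```nim
import hashes

proc fastHash(s: string): Hash =
  # FNV-1a over the bytes; unsigned arithmetic wraps, so this evaluates fine
  # both at run time and at compile time in the VM.
  var h = 0xcbf29ce484222325'u64
  for c in s:
    h = (h xor uint64(ord(c))) * 0x100000001b3'u64
  result = Hash(h and 0x7fffffff'u64)

static:
  doAssert fastHash("foo.h") == fastHash("foo.h")  # works at compile time
```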
20:53:07*marenz__ joined #nim
21:03:47*Yardanico quit (Read error: Connection reset by peer)
21:07:44*Arrrr quit (Read error: Connection reset by peer)
21:10:23*vlad1777d joined #nim
21:10:43FromGitter<RedBeard0531> The tables/critbits benchmark I posted earlier was fed by either /usr/share/dict/words (100k short string inputs) or `strings 500_meg_exe_with_dwarf_info` for 4.5 million moderately sized strings (avg 67 bytes)
21:16:57*nsf quit (Quit: WeeChat 1.9.1)
21:29:45AraqRedBeard0531: 10.10s vs 10.45s
21:29:58AraqM&S vs refc
21:30:12Araqso yeah, M&S wins but not by much
21:32:29Araqpeak memory: 4.7MB vs 1.504MB
21:32:43FromGitter<RedBeard0531> I wonder why I'm seeing such different results...
21:33:18Araqbut it uses too little memory to say anything really; M&S doesn't even kick in before 4 MB is used
21:33:55FromGitter<RedBeard0531> what is peak mem with --gc:none?
21:35:03*Trustable quit (Remote host closed the connection)
21:35:20Araq[GC] stack scans: 9315 -- number of collections for refc
21:35:34Araq[GC] collections: 260 -- number of collections for M&S
21:35:54Araqrefc is scheduled aggressively
21:36:05Araqmaybe too aggressively
21:36:50Araq--gc:none
21:36:53FromGitter<RedBeard0531> hmm. Does that imply that it is likely the cycle detector that is the problem, not the actual ref counting? (I haven't really looked at the internals of the GC yet)
21:37:03AraqCPU time [s]: 10.39, peak memory: 997.547 MiB
21:37:18Araqthe cycle collector is never called in refc
21:37:48Araqrefc is more aggressive and cuts down memory usage quite a bit, *shrug*
21:38:01Araqit's just some memory vs speed tradeoff
21:38:29FromGitter<RedBeard0531> huh? https://nim-lang.org/docs/gc.html says that the default is "Deferred Reference Counting with cycle detection". Is that out of date?
21:38:41*skrylar joined #nim
21:38:58AraqNo, that is correct, but for this program it never runs the cycle collector, as it doesn't have to
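For anyone wanting to reproduce this kind of comparison, a minimal harness sketch with a made-up workload; build it once per collector, e.g. `nim c -d:release bench.nim` and `nim c -d:release --gc:markAndSweep bench.nim`, and compare the timings plus the collector's own statistics.

```nim
import times

proc work() =
  # Stand-in allocation-heavy workload; substitute the real benchmark here.
  var names: seq[string]
  for i in 0 ..< 1_000_000:
    names.add($i & ".h")

when isMainModule:
  let start = cpuTime()
  work()
  echo "CPU time [s] ", cpuTime() - start
  echo GC_getStatistics()   # collection counts like the figures quoted above
```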
21:38:59GitDisc<NopeDK> How would you access an "extern struct structType structName" in a header file? I can "importc" the struct itself but how do you access the variable from Nim? (How to do "structName.field" on an extern)
21:41:32Araqvar stuff {.importc: "_stuff", header: "<header.h>".}: cint
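Spelling out that answer for the struct case NopeDK asked about; the field layout and header name are assumptions, only the names from the question are reused:

```nim
# The C side declares:  extern struct structType structName;
type
  StructType {.importc: "struct structType", header: "header.h".} = object
    field: cint          # mirror only the fields you actually need

var structName {.importc: "structName", header: "header.h".}: StructType

echo structName.field    # reads the C global directly
```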
21:42:33AraqRedBeard0531: with tiny modifications I can make it GC-free though, let's see
21:42:46FromGitter<RedBeard0531> Your times are ~50x higher than mine. How many inputs are you giving it? Which OS/Compiler/Compiler flags?
21:43:18GitDisc<NopeDK> Wow, that simple... Thanks Araq
21:43:29AraqI run https://gist.github.com/RedBeard0531/17c08ac262d8c4f62b130cee8dd50240 on the 2 files you gave me 1000 times
21:44:10AraqWindows 10, 64 bit, Intel i7 4.36GHz
21:44:17FromGitter<RedBeard0531> My understanding is that FS metadata queries on Windows are dog slow, so that could be the problem
21:44:26FromGitter<RedBeard0531> Try commenting out the last two lines
21:44:38*sz0 joined #nim
21:45:11FromGitter<RedBeard0531> If you comment out 79 as well it will remove the hash table from the equation. Not sure if you want that in there or not.
21:46:10Araqomg wtf
21:46:21FromGitter<RedBeard0531> yup :)
21:46:29Araqif input.fileExists:
21:46:29Araq discard input.getLastModificationTime()
21:46:39Araqtakes up **all** the time?
21:46:53Araqthat cannot be true
21:46:57FromGitter<RedBeard0531> I don't think Windows caches metadata
21:47:05FromGitter<RedBeard0531> so it always goes to disk
21:47:22Araqwow what a piece of crap lol
21:47:25FromGitter<RedBeard0531> either that or it just adds a sleepMillis(1) for fun...
21:47:43Araqprobably os.nim should implement its own caching layer then
21:49:11Araqok, now refc is significantly slower indeed
21:49:13FromGitter<RedBeard0531> not too many programs hammer fileExists/getLastModificationTime. Even fewer do it repeatedly on the same file. Those that do probably have their own caching strategy already.
21:50:06Araqwell I don't reuse the 'seen' cache in my for loop
21:51:18*Ven`` quit (Read error: Connection reset by peer)
21:51:41FromGitter<RedBeard0531> On Linux that cache makes almost no difference (but it better models how nimja will need to work, for other reasons). On Windows it appears to be critical.
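A minimal sketch of the caching-layer idea floated above (not an existing os.nim API; the path is made up): memoize existence and mtime per path so repeated queries never hit the filesystem twice.

```nim
import os, times, tables, options

type StatCache = object
  entries: Table[string, Option[Time]]   # none(Time) records "does not exist"

proc mtime(cache: var StatCache, path: string): Option[Time] =
  if path notin cache.entries:
    cache.entries[path] =
      if path.fileExists: some(path.getLastModificationTime())
      else: none(Time)
  result = cache.entries[path]

var cache = StatCache(entries: initTable[string, Option[Time]]())
if cache.mtime("build/foo.d").isSome:
  echo "exists; later queries come from the table, not the OS"
```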
21:52:18*Ven`` joined #nim
21:55:48Araqinteresting
21:56:05AraqI managed to create a program where both GCs run 199 times
21:56:48AraqM&S is 25% faster still
21:57:17Araqpeak memory is identical
21:57:23FromGitter<Varriount> Araq: Have you considered that the getLastModificationTime implementation itself is slow?
21:57:39Araqread the logs we're past that point
21:58:39FromGitter<Varriount> Even so, the implementation is rather inefficient
21:59:05*CcxWrk quit (Quit: ZNC 1.6.4 - http://znc.in)
22:00:48*CcxWrk joined #nim
22:01:14Araqwell I cannot make refc beat M&S for this program
22:01:17Araq:-)
22:02:00FromGitter<RedBeard0531> Did it at least lead to any interesting optimizations to refc?
22:02:39*claudiuinberlin quit (Quit: Textual IRC Client: www.textualapp.com)
22:04:04FromGitter<data-man> @Araq: ⏎ ⏎ > Windows 10, 64 bit ⏎ Have you tried a truecolored console?  [https://gitter.im/nim-lang/Nim?at=5a398cd31a4e6c82232c6ed6]
22:04:28FromGitter<data-man> Have you tried a truecolored console? :-)
22:06:34FromGitter<RedBeard0531> The C++ dev in me is very disturbed that M&S is beating refc, but I can't argue with the evidence
22:07:07FromGitter<RedBeard0531> And Nim doesn't even use atomic refcounts, right? Since they are all supposed to be thread-local
22:07:16GitDisc<NopeDK> Whoops, I misstated things earlier. The actual initialization of the struct is somewhere else. What I am trying to do is create a shared lib that is dynamically loaded by an existing binary, and from that library I would like to access a struct that is initialized by the main binary. All existing libraries are pure C and just use "extern". The header is used to get the types and functions called by the bin <message clipped>
22:08:22AraqRedBeard0531: They are thread-local, but most of the GC tuning was done a couple of years ago and hasn't been touched since
22:12:32Araqthere is enough logic in the GC though to make it "not obviously refcounting"
22:12:55Araqthat said, it's widely known that RC loses for throughput
22:16:32Araqbtw it also loses for latency ... *cough* non-interruptible recursive freeing of data structures
22:16:43*ajusa joined #nim
22:17:09ajusaHey guys, I am wondering how to get Nim to output a single C file that could be compiled by gcc and run
22:17:56ajusaI don't have control over the way the code is compiled, as it is submitted to an external service
22:18:25ajusaThere is also a 100kb file size limit imposed
22:19:43*ajusa quit (Client Quit)
22:21:11FromGitter<Varriount> ajusa: Why?
22:21:12FromGitter<Ajusa> with `nim c --compileOnly -d:release main.nim` I can see the C code being generated in nimcache, but the C file references nimbase.h
22:21:37FromGitter<Ajusa> It is for a programming competition :P
22:22:48FromGitter<Ajusa> I tried copy pasting the contents of nimbase.h into where the include statement was, but that didn't work. I can submit one C file, to be compiled externally, up to 100kb in size
22:23:05FromGitter<Varriount> Ajusa: Hm. Nim usually splits each module into its own C/object file. That way it doesn't have to recompile all modules.
22:28:40FromGitter<Ajusa> Yeah... is there any hope for me? I also tried looking into other languages, but no such luck from what I could see
22:30:30FromGitter<RedBeard0531> I'm sure there are minifiers for C. The Nim output isn't good for size limits since it adds a hash to most identifiers.
22:32:22FromGitter<Varriount> @Ajusa You could have a C file which mmaps a byte array into memory
22:32:38FromGitter<Varriount> then executes the mmap'd data
22:33:06Araqjust concat the .c files and nimbase.h
22:33:26Araqand remove duplicate inline procs :P
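A naive sketch of that concatenation step (paths are made up; deduplicating the inline procs, as noted above, is left as a manual step):

```nim
# amalgamate.nim: naive concatenation of nimbase.h and the generated .c files.
import os, strutils

proc amalgamate(nimbaseH, nimcacheDir, outFile: string) =
  var output = readFile(nimbaseH)
  for f in walkFiles(nimcacheDir / "*.c"):
    for line in readFile(f).splitLines():
      if "nimbase.h" notin line:      # the include was just inlined above
        output.add(line & "\n")
  writeFile(outFile, output)

# Paths are made up; nimbase.h ships in Nim's lib directory.
amalgamate("lib/nimbase.h", "nimcache", "combined.c")
```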
22:35:51*solitudesf quit (Ping timeout: 256 seconds)
22:37:31*Ven`` quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
22:38:02FromGitter<Ajusa> I tried concatenating them
22:38:41FromGitter<Ajusa> @Varriount so basically embedding the exe in the C file itself and running it?
22:38:49FromGitter<Varriount> Essentially.
22:38:57FromGitter<Varriount> You would have to know the target system though.
22:39:41FromGitter<Varriount> That's what most program loaders do anyway. Read a file, mmap it into the process space, find where main() is, then call it.
22:40:05FromGitter<Ajusa> that seems like the easiest solution... hmm
22:40:06GitDisc<NopeDK> Wow I feel stupid... My problem was as simple as just putting importc on the variable declaration; it gets treated as extern automatically.
22:40:47FromGitter<Ajusa> Only, the generated exe for hello world is 200 KB, so I would break the file size limit
22:41:27FromGitter<Varriount> @Ajusa Try --opt:size and use gzip?
22:42:19FromGitter<Varriount> @Ajusa And use this: https://hookrace.net/blog/nim-binary-size/
22:42:20FromGitter<Ajusa> 68 KB with that, and if I used gzip, wouldn't I have to write code to unzip the file, taking up more space?
22:44:03FromGitter<Ajusa> I am going through that link, ought to be interesting :D
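A minimal NimScript config sketch in the spirit of that post; the flags are illustrative and no guarantee of fitting under the 100 KB limit:

```nim
# A NimScript config (e.g. config.nims) next to the project file.
switch("define", "release")   # never measure size or speed without this
switch("opt", "size")         # ask the C compiler to optimize for size
switch("passC", "-flto")      # link-time optimization...
switch("passL", "-flto")
switch("passL", "-s")         # ...and strip symbols from the final binary
```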
22:46:51FromGitter<Ajusa> They might disqualify me if I don't show my actual code lol. I will try what Araq said as well, and report back once I figure out the best way
22:56:50*HM left #nim ("Leaving")
22:59:04*MJCaley joined #nim
23:08:21*jjido quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
23:13:51FromGitter<Varriount> @Araq is that a vtref/vtptr PR that I see?! :D