| 00:38:42 | FromDiscord | <threefour> Ah |
| 01:15:38 | FromDiscord | <Robyn [She/Her]> I'm gonna have to make a few utilities to convert an AST into the simplest form it can be T-T |
| 03:11:24 | * | rockcavera quit (Remote host closed the connection) |
| 04:12:26 | * | alexdaguy joined #nim |
| 04:22:01 | * | ntat joined #nim |
| 05:58:25 | * | amadaluzia quit (Ping timeout: 276 seconds) |
| 06:03:41 | * | amadaluzia joined #nim |
| 06:04:17 | * | amadaluzia quit (Remote host closed the connection) |
| 06:16:14 | * | SchweinDeBurg quit (Quit: WeeChat 4.7.0-dev) |
| 06:16:38 | * | SchweinDeBurg joined #nim |
| 07:12:51 | * | andy-turner joined #nim |
| 07:20:36 | * | ntat quit (Quit: leaving) |
| 07:55:17 | * | ntat joined #nim |
| 08:12:05 | * | ntat quit (Read error: Connection reset by peer) |
| 08:12:21 | * | ntat joined #nim |
| 08:15:37 | * | ntat quit (Read error: Connection reset by peer) |
| 08:53:59 | * | ntat joined #nim |
| 09:14:28 | FromDiscord | <k4xz2v> nim 🐀 virus? |
| 09:18:16 | * | xet7 quit (Remote host closed the connection) |
| 09:19:58 | * | ntat quit (Read error: Connection reset by peer) |
| 09:21:37 | FromDiscord | <mratsim> In reply to @battery.acid.bubblegum "I'm gonna have to": beware, soon you'll be a compiler dev |
| 09:23:04 | FromDiscord | <nnsee> In reply to @k4xz2v "nim 🐀 virus?": huh? |
| 09:23:11 | FromDiscord | <nnsee> nim is not a virus if that's what you're asking |
| 09:23:13 | FromDiscord | <nnsee> it's a programming language |
| 09:24:06 | * | xet7 joined #nim |
| 09:31:06 | * | xet7 quit (Remote host closed the connection) |
| 09:42:50 | FromDiscord | <k4xz2v> my antivirus deleted my stuff 😤 |
| 09:46:44 | FromDiscord | <nnsee> In reply to @k4xz2v "my antivirus deleted my": that is a known issue with some antivirus software; they're false positives. what antivirus are you using? |
| 09:46:53 | * | andy-turner quit (Quit: Leaving) |
| 09:48:26 | * | ntat joined #nim |
| 10:00:37 | FromDiscord | <nocturn9x> does anyone experience like 3x slower performance with `-d:release` as opposed to `-d:danger`? |
| 10:00:54 | FromDiscord | <nocturn9x> My multithreaded data generation code is way, waay slower in release vs danger mode |
| 10:01:00 | FromDiscord | <nocturn9x> can't just be `--checks:on`, tried that |
| 10:01:56 | FromDiscord | <nocturn9x> kind of silly that the safer option is almost unusably slow |
| 10:15:41 | * | ntat quit (Read error: Connection reset by peer) |
| 10:38:10 | * | ntat joined #nim |
| 10:38:44 | Amun-Ra | why? |
| 10:39:01 | Amun-Ra | checking array bounds, etc costs cpu cycles |
| 11:10:04 | FromDiscord | <nocturn9x> what |
| 11:10:11 | FromDiscord | <nocturn9x> I already said I tried just with `--checks:on` |
| 11:10:15 | FromDiscord | <nocturn9x> the performance impact is not as significant |
| 11:10:18 | FromDiscord | <nocturn9x> not even close |
| 11:14:06 | Amun-Ra | "etc" |
| 11:14:26 | Amun-Ra | compare C sources in both cases |
| 11:17:27 | FromDiscord | <Robyn [She/Her]> In reply to @mratsim "beware, soon you'll be": lolol |
| 11:28:34 | * | ntat quit (Read error: Connection reset by peer) |
| 11:34:03 | * | ntat joined #nim |
| 11:45:46 | * | vsantana joined #nim |
| 12:01:43 | * | ntat quit (Quit: leaving) |
| 12:03:06 | * | vsantana quit (Quit: vsantana) |
| 12:21:37 | FromDiscord | <nocturn9x> In reply to @mratsim "beware, soon you'll be": I've gone down that rabbit hole |
| 12:21:37 | FromDiscord | <nocturn9x> can be fun |
| 12:32:07 | FromDiscord | <k4xz2v> In reply to @nnsee "that is a known": idk xd i have many |
| 12:45:47 | FromDiscord | <nnsee> ... many? why? you know only one can be active at the same time? |
| 12:45:59 | FromDiscord | <nnsee> the other ones are just sitting there doing nothing |
| 12:47:06 | FromDiscord | <k4xz2v> sometimes it crashes, or sometimes a virus escapes |
| 12:51:16 | FromDiscord | <nnsee> In reply to @k4xz2v "sometimes it crashes, or": even if that were true, that doesn't matter, since the other antiviruses will not be running regardless |
| 12:51:31 | FromDiscord | <nnsee> trust me when I say that windows defender is all you need and you're better off uninstalling everything else |
| 12:51:37 | FromDiscord | <k4xz2v> huh? okay i will delete the virus |
| 12:53:11 | FromDiscord | <devlop_gaming> Is there a data type that will allow me to store auto? Or is it not possible? |
| 12:54:02 | Amun-Ra | auto as in any type? |
| 12:54:28 | FromDiscord | <nnsee> In reply to @devlop_gaming "Is there a data": auto isn't a "real" type; it gets resolved to the actual type at compile time |
| 12:54:48 | FromDiscord | <nnsee> although i guess it depends on what exactly you mean by store |
| 12:56:32 | FromDiscord | <devlop_gaming> I was trying to see if I could store any type inside a tuple and change its value to another type like how you can in a Python list |
| 12:58:44 | FromDiscord | <nnsee> that does not work in nim |
| 12:59:06 | FromDiscord | <Laylie> here are some ways to emulate it: https://internet-of-tomohiro.pages.dev/nim/faq.en#type-how-to-store-different-types-in-seqqmark |
| 12:59:51 | FromDiscord | <fabric.input_output> you could use object variants to have values of a fixed set of types. only one at a time per type tho |
| 13:02:11 | Amun-Ra | and only one type per instance |
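As a rough illustration of the object-variant approach suggested above (the `ValueKind`/`Value` names are made up for this sketch), one declared type can carry values of several fixed kinds, but each instance holds exactly one kind at a time:

```nim
type
  ValueKind = enum vkInt, vkFloat, vkStr
  Value = object
    case kind: ValueKind
    of vkInt:   i: int
    of vkFloat: f: float
    of vkStr:   s: string

var xs = @[Value(kind: vkInt, i: 1), Value(kind: vkStr, s: "two")]
# "changing the type" of an element means overwriting it with a new variant:
xs[0] = Value(kind: vkFloat, f: 3.14)

for v in xs:
  case v.kind
  of vkInt:   echo v.i
  of vkFloat: echo v.f
  of vkStr:   echo v.s
```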
| 13:23:41 | * | amadaluzia joined #nim |
| 13:48:23 | * | ntat joined #nim |
| 14:07:24 | * | przmk_ joined #nim |
| 14:08:44 | * | przmk quit (Ping timeout: 245 seconds) |
| 14:08:44 | * | przmk_ is now known as przmk |
| 14:35:15 | FromDiscord | <mratsim> In reply to @nocturn9x "does anyone experience like": stacktraces? |
| 14:35:27 | FromDiscord | <nocturn9x> that is probably it |
| 14:36:07 | FromDiscord | <mratsim> it rings a bell with openmp at the very least, when it's not just crashing (but that was with the ref gc) |
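A hedged sketch of how to narrow that down: time the same workload built with each of these standard Nim switches (`myprog.nim` is just a placeholder for the actual program) and see which toggle accounts for the release/danger gap.

```sh
nim c -d:danger myprog.nim                                      # all checks and tracing off
nim c -d:release myprog.nim                                     # plain release build
nim c -d:release --checks:off myprog.nim                        # release without runtime checks
nim c -d:release --stackTrace:off --lineTrace:off myprog.nim    # release without stack/line tracing
```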
| 15:08:53 | * | MrGoblins joined #nim |
| 15:10:37 | FromDiscord | <nervecenter> In reply to @nocturn9x "does anyone experience like": In case anyone is using `-d:danger` with Datamancer, I recommend against it. I do a huge number of calculations on sequences of or concatenated dataframes, and especially floating-point math comes out wonky and inconsistent in danger mode. Always keep checks on if you're dependent on accuracy.↵↵As for the performance difference, no idea. |
| 16:13:55 | FromDiscord | <anuke> In what way does -d:danger affect floating point math? That sounds serious. |
| 16:19:43 | FromDiscord | <Laylie> indeed, does danger change how the float math works or is it just blowing through assertions? we need to know.. |
| 16:23:13 | FromDiscord | <anuke> I didn't think speed flags affected math results (outside of integer overflows or -ffast-math) |
| 16:28:10 | FromDiscord | <Elegantbeef> They shouldn't; it's either a compiler bug or someone is holding it wrong |
| 16:46:19 | * | alexdaguy quit (Quit: ded) |
| 16:49:49 | FromDiscord | <mratsim> In reply to @nervecenter "In case anyone is": I checked just in case arraymancer is defining -ffast-math somewhere but it doesn't seem to be the case. |
| 16:50:29 | FromDiscord | <mratsim> In reply to @nervecenter "In case anyone is": now there is a thing called catastrophic cancellation and in that case it's not datamancer. |
| 16:51:46 | FromDiscord | <mratsim> https://en.wikipedia.org/wiki/Catastrophic_cancellation |
| 16:52:36 | FromDiscord | <mratsim> You need to use a cancellation-robust algorithm, like Welford's algorithm for mean/variance/stddev or Kahan summation for sums, to avoid it |
| 16:52:49 | FromDiscord | <Elegantbeef> I also don't see any references of `danger` `checks` or anything else that would cause different logic |
| 16:53:31 | FromDiscord | <mratsim> if you do 1000000 + 999999 + 999998 + ... + 1, you would get a different result from 1 + ... + 999998 + 999999 + 1000000 |
| 16:54:30 | FromDiscord | <mratsim> the inconsistency might be multithreading + the fact that addition in floating point is not associative?↵↵i.e. (a+b)+c != a+(b+c)? |
| 16:55:01 | FromDiscord | <mratsim> in that case with -d:danger, there is less overhead, so code ordering becomes more random? |
| 16:57:28 | FromDiscord | <Elegantbeef> My dumbass wants to say isn't it more "not commutative" since it's about the order the terms appear 😄 |
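A small self-contained sketch of both points, in case it helps: float addition really does depend on grouping/order, and a compensated (Kahan) sum, as mratsim mentions, keeps the error tiny however the terms are ordered (the data and names below are purely illustrative):

```nim
# Grouping changes the rounded result: float addition is not associative.
let a = 1e16
let b = -1e16
let c = 1.0
echo (a + b) + c        # 1.0
echo a + (b + c)        # 0.0 -- c is absorbed by b before a cancels it

# Kahan (compensated) summation carries a correction term for the bits
# lost in each big+small addition.
proc kahanSum(xs: openArray[float]): float =
  var comp = 0.0                # running compensation
  for x in xs:
    let y = x - comp
    let t = result + y
    comp = (t - result) - y     # what was lost in the rounded addition
    result = t

var data = @[1e16]
for i in 1 .. 1000:
  data.add 1.0
echo kahanSum(data)   # keeps the 1000 small terms; a plain `+=` loop stays at 1e16
```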
| 17:14:40 | FromDiscord | <nervecenter> In reply to @mratsim "if you do 1000000": This was a while ago and I had no way to even start debugging, so I stuck with `-d:release`. I might be able to produce a sample scenario that replicates what I'm trying to do and the inconsistency I saw. |
| 17:15:00 | FromDiscord | <nervecenter> I can submit it as an issue, but currently I have other work priorities. |
| 17:17:10 | FromDiscord | <nervecenter> Also, the calculation I remember it having a clearly visible effect on was one where I extract series data to `seq[float]` with `toSeq1D()` before dropping the results in new DataFrames, so if it's not a Datamancer issue I'll have to formally apologize. |
| 17:44:46 | * | xet7 joined #nim |
| 18:27:54 | * | xet7 quit (Quit: Leaving) |
| 18:38:45 | * | xet7 joined #nim |
| 19:28:27 | * | GnuYawk quit (Quit: The Lounge - https://thelounge.chat) |
| 19:28:50 | * | GnuYawk joined #nim |
| 20:07:14 | * | ntat quit (Quit: leaving) |
| 22:20:00 | FromDiscord | <k4xz2v> help: if I use a library and only use 1 function, does the compiler only add that function or does it add the whole library, if the function only depends on itself? |
| 22:24:39 | FromDiscord | <leorize> only that fn |
| 22:49:27 | FromDiscord | <fabric.input_output> @pmunch is there a way to sort of import a C macro with futhark like how in nim you usually declare it as a proc with importc? |
| 22:49:36 | FromDiscord | <fabric.input_output> I can't just do the nim proc thing |
| 22:49:46 | FromDiscord | <fabric.input_output> because it pulls in all the rest of the header |
| 22:49:49 | FromDiscord | <fabric.input_output> and things get messed up |
| 22:51:37 | FromDiscord | <heysokam> In reply to @fabric.input_output "<@392962235737047041> is there a": You might need to wrap that manually↵I think I asked the same some time ago, and the answer is that there was no way of differentiating between regular macros and function-like macros |
| 22:51:51 | FromDiscord | <fabric.input_output> oh damn |
| 22:51:57 | FromDiscord | <fabric.input_output> I guess I'm just copy pasting it |
| 22:52:17 | FromDiscord | <heysokam> why not create the import for that manually as a function? |
| 22:52:19 | FromDiscord | <fabric.input_output> don't think vulkan is going to change the `VK_MAKE_VERSION` anytime soon |
| 22:52:48 | FromDiscord | <fabric.input_output> In reply to @heysokam "why not create the": because I need to specify a header, and nim includes that header, which messes with what futhark includes |
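For a macro as trivial as `VK_MAKE_VERSION` (it only packs three numbers into a uint32), one hedged workaround is to mirror the bit layout directly in Nim instead of importing anything from the header; the template name below is made up:

```nim
# Vulkan's VK_MAKE_VERSION packs major into bits 22 and up, minor into
# bits 12..21 and patch into bits 0..11 of a single uint32.
template vkMakeVersion(major, minor, patch: uint32): uint32 =
  (major shl 22) or (minor shl 12) or patch

let appVersion = vkMakeVersion(1'u32, 0'u32, 0'u32)
echo appVersion   # same value the C macro would produce
```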
| 22:53:26 | FromDiscord | <heysokam> also, if you are working with vulkan, we have a lot of progress made at https://github.com/DanielBelmes/VulkanNim and could use some help/contributions |
| 22:53:50 | FromDiscord | <heysokam> iirc, last thing that @Daniel Belmes told me is that the bindings are already usable |
| 22:55:13 | FromDiscord | <fabric.input_output> why not just futhark it tho |
| 22:56:08 | FromDiscord | <heysokam> it's also an option, but futhark doesn't provide a way to create constructors or provide default values for structs |
| 22:57:16 | FromDiscord | <heysokam> futhark gives you a very c-like outcome. VulkanNim's goal is to nim-mify the bindings as much as possible (although it currently does not do that 100%, but that's the goal) |
| 22:57:30 | FromDiscord | <fabric.input_output> the stuff seems pretty c-like |
| 22:57:44 | FromDiscord | <fabric.input_output> I'm not sure how it's nimmified |
| 22:57:57 | FromDiscord | <heysokam> > although currently does not do that 100% |
| 22:58:07 | FromDiscord | <heysokam> > and could use some help/contributions |
| 22:58:23 | FromDiscord | <fabric.input_output> idk if I'm fit for this I know 0 vulkan |
| 22:58:49 | FromDiscord | <heysokam> kk, sry for the idea |
| 22:59:46 | FromDiscord | <fabric.input_output> I'm just trying to ditch opengl atm, maybe if I use it for a while and learn how it works I'll contribute |
| 23:00:30 | FromDiscord | <heysokam> wrapping wgpu might take you further+faster |
| 23:00:43 | FromDiscord | <heysokam> vulkan is quite heavy to learn |
| 23:00:55 | FromDiscord | <fabric.input_output> yeah but I've heard it has deadlock issues and whatnot |
| 23:01:12 | FromDiscord | <fabric.input_output> I've tried webgpu |
| 23:01:14 | FromDiscord | <heysokam> deadlock issues? |
| 23:01:15 | FromDiscord | <fabric.input_output> it's nice |
| 23:01:35 | FromDiscord | <fabric.input_output> In reply to @heysokam "deadlock issues?": yeah, I've heard it deadlocks with multiple queues or something from multiple threads |
| 23:01:57 | FromDiscord | <heysokam> that sounds like a fixable bug |
| 23:03:18 | FromDiscord | <heysokam> btw, if not aware, I spent some months writing this↵https://github.com/heysokam/wgpu↵must be really outdated, could be helpful as a reference since I figured out all the integration with cargo+nimvm |
| 23:04:02 | FromDiscord | <fabric.input_output> https://github.com/romdotdog/comet/blob/main/src/webgpu.nim |
| 23:04:24 | FromDiscord | <heysokam> should be possible to do the same in a more reliable way with futhark↵I started with the idea, but I don't use nim much so I didn't complete it |
| 23:05:16 | FromDiscord | <heysokam> In reply to @fabric.input_output "https://github.com/romdotdog/comet/blob/main/src/we": oh, nice bindings, wrapping the `.js` api! was not aware, but that's really cool. ty for sharing |
| 23:06:17 | FromDiscord | <fabric.input_output> In reply to @heysokam "should be possible to": I could've used c++ or rust for this but c++, uhh, kinda sucks and rust is too annoying when just trying to get something to work |
| 23:06:18 | FromDiscord | <Daniel Belmes> In reply to @heysokam "iirc, last thing that": They are unstable on the main branch. Much more stable in the PR I have open. I just need to remake the example code and I'll merge it into mainline. However, that being said, I highly recommend using wgpu/webgpu. |
| 23:06:55 | FromDiscord | <Daniel Belmes> If you want to learn Vulkan, then use Vulkan; it's helpful for industry work. But if you're just trying to make games or a CAD application, wgpu is more than great. |
| 23:07:04 | FromDiscord | <heysokam> In reply to @fabric.input_output "I could've used c++": yea, I feel you. cpp is so slow to compile, and rust is tricky to work with |
| 23:07:27 | FromDiscord | <fabric.input_output> eh not really slow, more like delicate |
| 23:07:34 | FromDiscord | <fabric.input_output> footguns and whatever |
| 23:07:41 | FromDiscord | <fabric.input_output> I don't have the time for those |
| 23:08:22 | FromDiscord | <heysokam> wdym not slow? it's like 100-200% slower to compile anything than C or Nim. even more when linking heavy projects like clang/llvm |
| 23:08:39 | FromDiscord | <fabric.input_output> yeah but I'm not doing heavy stuff anyways |
| 23:08:55 | FromDiscord | <fabric.input_output> it's not really my concern is the main point |
| 23:09:23 | FromDiscord | <fabric.input_output> and clang/llvm are absolute units; I tried compiling them from source and it took an entire day |
| 23:09:25 | FromDiscord | <heysokam> even linking to glfw.hpp or vulkan.hpp is heavy in cpp land 🙈 |
| 23:09:36 | FromDiscord | <fabric.input_output> that's why I just use the C api |
| 23:10:30 | FromDiscord | <heysokam> In reply to @fabric.input_output "and clang/llvm are absolute": yea, aware. I'm just talking about an app that dynamically links to them. 500 sloc importing clang takes literally 2 minutes ⚰️ |
| 23:10:46 | FromDiscord | <fabric.input_output> 💀 |
| 23:11:59 | FromDiscord | <heysokam> but yea, back on topic. I'm with Daniel. Consider learning wgpu to replace opengl↵vulkan is only good if you plan on becoming a graphics engineer by trade |
| 23:12:34 | FromDiscord | <fabric.input_output> ok I'll consider wgpu more but I wanna give vulkan a try first |
| 23:13:20 | FromDiscord | <heysokam> makes sense. you will quickly see the difference once you create a triangle in both |
| 23:13:38 | FromDiscord | <heysokam> wgpu is like vulkan without the boilerplate craziness, essentially |
| 23:13:54 | FromDiscord | <fabric.input_output> yeah |
| 23:14:02 | FromDiscord | <heysokam> the workflow and api is almost identical, minus a couple of differences |
| 23:14:15 | FromDiscord | <heysokam> at least when talking about wgpu-native |
| 23:14:17 | FromDiscord | <fabric.input_output> I've used the compute shaders more than the graphics stuff tho |
| 23:14:38 | FromDiscord | <heysokam> the graphics api is a pleasure to work with |
| 23:15:40 | FromDiscord | <heysokam> there is a learnwebgpu c++ tutorial out there. follow it with nim when you have the time↵i wish I started with that and not opengl, it's so good and so much easier to learn |
| 23:16:44 | FromDiscord | <fabric.input_output> ye |
| 23:16:49 | FromDiscord | <fabric.input_output> I've seen it |
| 23:37:48 | FromDiscord | <Daniel Belmes> In reply to @fabric.input_output "ok I'll consider wgpu": Redo the triangle example in my wip branch 🙏🏼🤞🤞🤞 The old one doesn't work with the current Vulkan version 🥲 |
| 23:38:17 | FromDiscord | <Daniel Belmes> I just burn out after the work week and haven't been getting to it. |
| 23:48:38 | FromDiscord | <heysokam> same |