I've had a pretty good experience offering bounties on my own projects:
- https://github.com/orgs/com-lihaoyi/discussions/6
If you look at that thread, you'll see I've paid out quite a lot in bounties, somewhere around 50-60k USD (the amount is not quite precise, because some bounties I completed myself without paying out, and others I paid extra on when the work turned out to be more than expected). In exchange, I did manage to get quite a lot of work done for that cost.
You do get some trash, it does take significant work to review, and not everything is amenable to bounties. But for projects that already have interested users and potential collaborators, sometimes 500-1000 USD in cash is enough motivation for someone to go from curious to engaged. And if I can pay someone 500-1000 USD to save myself a week of work (and the associated context switching), it can definitely be worth the cost.
The bounties are certainly not a living wage, especially compared to my peers making 1M USD/yr at some big-tech FAANG. It's just a token of appreciation that somehow feels qualitatively different from the money that comes in your twice-monthly paycheck.
Is this the standard way to do bounties, where you take applications and then choose someone to attempt the bounty? I always thought you'd just state the requirements and the bounty, then screen the submissions and choose a winner.
Granted, this does feel a bit less like asking for spec work, so I can see why they might have chosen to go this way instead of generically accepting submissions.
It's not a standard way to do it. Different projects adopt different approaches.
I posted a list of projects offering bounties elsewhere [1] in the thread.
[1] https://news.ycombinator.com/item?id=45278787
I only briefly glanced at your project, but it doesn’t look like a commercial offering or a component of one… what is your motivation for paying people to do this work? I would think bounties would be used more often by companies who need some open source feature for interoperability or integration purposes…
Having more money than free time but still wanting a thing to get done. Lots of folks pay good money for hobbies (video games, golf fees, bicycle purchases, etc.).
Can you deduct these expenditures fully in income tax?
You could deduct them if you have a corporation with some reasonable claim to the IP behind the projects, or a clear business reason for needing the features. There’s probably no clear settled tax law on the specific topic, but I’m sure it would pass an audit as long as there isn’t some egregiously obvious non-business related work being bountied.
Several years ago I wrote a bare-metal Notion-to-Obsidian conversion script. At the time Bases wasn't available, so databases just got turned into CSV tables. It was a relatively simple, no-dependency Python script: you export your Notion notes as zips of Markdown files, and it then checks every file to fix the linking and the weird naming (with the caveat that not all links are properly exported as Markdown links by Notion).
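For flavor, here's a minimal sketch of that kind of cleanup pass. The paths and helper names are my own assumptions, not the original script; the 32-hex-character page IDs Notion appends to exported names are the "weird naming" in question:

```python
# Hedged sketch of a no-dependency post-processing pass over an unzipped
# Notion "Export as Markdown" folder: strip the 32-hex-char page IDs from
# file/folder names, then rewrite internal links to match.
import os
import re
import urllib.parse

EXPORT_DIR = "notion-export"  # assumed path to the unzipped export

# "My Page 0123...cdef.md" -> "My Page.md"; also matches folder names
HEX_ID = re.compile(r" [0-9a-f]{32}(?=(\.md)?(/|$))")

def clean(name: str) -> str:
    return HEX_ID.sub("", name)

# Pass 1: rename files and folders, bottom-up so children are renamed first.
for root, dirs, files in os.walk(EXPORT_DIR, topdown=False):
    for entry in files + dirs:
        if clean(entry) != entry:
            os.rename(os.path.join(root, entry),
                      os.path.join(root, clean(entry)))

# Pass 2: fix Markdown links, which Notion URL-encodes and ID-suffixes.
MD_LINK = re.compile(r"\]\(([^)]+\.md)\)")
for root, _, files in os.walk(EXPORT_DIR):
    for fname in files:
        if not fname.endswith(".md"):
            continue
        path = os.path.join(root, fname)
        with open(path, encoding="utf-8") as f:
            text = f.read()
        # decode, strip the ID, re-encode so the link still parses
        text = MD_LINK.sub(
            lambda m: "](" + urllib.parse.quote(
                clean(urllib.parse.unquote(m.group(1)))) + ")",
            text)
        with open(path, "w", encoding="utf-8") as f:
            f.write(text)
```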
Today I learned that Obsidian has an API for this. Still, I wonder why it isn't easier to just use Notion's "download your pages as Markdown". Notion would very much dislike an API that lets users migrate away from it, and would probably actively sabotage one; "download notes as Markdown", however, is a user-facing service, which they probably don't want to break. (Maybe that changes now that they've added an offline mode, albeit too late; I don't know.)
(I work at Notion and built the HTML exporter during my hiring-process work trial in 2019; opinions are my own.)
I would love a two-way sync between Notion and an Obsidian vault. Notion is focused on online collaboration; Obsidian is focused on file-over-app personal software customization, and there's so much Obsidian can do, especially with plugins, that Notion isn't able to address. If we can make the two work together, even if it's not perfectly seamless, it would extend the usefulness of both tools by uniting their strengths and avoiding the tradeoff of their weaknesses.
If only I had an extra 24h per day I'd build it myself, but it needs some fairly complex machinery for change tracking & merging, which would require ongoing support, so it's not something I can tackle responsibly as a weekend project.
At the least we could offer YAML frontmatter as an option for Notion's Markdown export feature. Maybe I can get to that today; I have a few spare hours.
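As a sketch of what that option could emit, here is one way to flatten a Notion page object's properties (in the shape the public API returns them) into frontmatter. The handling below is illustrative and nowhere near exhaustive, and says nothing about how Notion's actual exporter is built:

```python
# Hedged sketch: turn a Notion API page object's properties into YAML
# frontmatter for a Markdown export. Proper YAML escaping/typing and the long
# tail of property types (rollups, relations, people, files, formulas) omitted.

def plain_text(rich_text: list) -> str:
    return "".join(rt.get("plain_text", "") for rt in rich_text)

def to_yaml_value(prop: dict):
    t = prop["type"]
    if t in ("title", "rich_text"):
        return plain_text(prop[t])
    if t == "select":
        return prop[t]["name"] if prop[t] else None
    if t == "multi_select":
        return "[" + ", ".join(opt["name"] for opt in prop[t]) + "]"
    if t == "date":
        return prop[t]["start"] if prop[t] else None
    if t in ("number", "checkbox", "url", "email"):
        return prop[t]
    return None  # skip everything else in this sketch

def frontmatter(page: dict) -> str:
    lines = ["---",
             f"notion-id: {page['id']}",
             f"created: {page['created_time']}"]
    for name, prop in page.get("properties", {}).items():
        value = to_yaml_value(prop)
        if value is not None:
            lines.append(f"{name}: {value}")
    lines.append("---")
    return "\n".join(lines) + "\n"
```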
In the age of Claude Code, real-time collaboration + local-first Markdown + easy-to-write custom plugins is the future. It makes no sense to lock up your documents in a SaaS product that gatekeeps your access to using AI on your own documents.
That's why I've been working on Relay [0], a privacy-preserving, local-first collaboration plugin for Obsidian.
Our customers really like being able to self-host Relay Servers for complete document privacy while using our global identity system to do cross-org collaboration with anyone in the world.
[0] https://relay.md
We offer Notion MCP since most users prefer the ease of a GUI, but I agree that the Claude Code CLI + Markdown files is handy. If we had Obsidian-style Markdown sync, we could just as well sync to an Obsidian vault in a Git repo. We have lots of docs in Notion, as you might imagine, and having them synced into the Git repo could improve their hit rate with Claude and the whole menagerie of other code agents, without needing to hook them all up with MCP and waste time on remote tool calls.
Everyone is looking down on LLM-assisted dev here, but I think it's a great fit.
I also don't believe it can be one-shotted (there are too many deltas between Notion's API and Obsidian's).
With that said, LLMs are great at enumerating edge cases, and this feels like the perfect task for Codex/Claude Code.
I'd implore the Obsidian team/maintainers to take a stab at building this with LLMs. Based on personal experience, the cost is likely within the same order of magnitude ($100-$1k in API cost + dev time), but the additional context (tests, docs, etc.) will be invaluable for future changes to either API surface.
> Everyone is looking down on LLM-assisted dev here, but I think it's a great fit.
From my own experience, I don't think that's the case. I wrote a similar Obsidian <-> Notion-databases sync script myself some months ago, and I also used AI in the beginning; but I learned really fast what an annoying mess Notion's API is, and how quickly LLMs get hung up on its edge cases. AI is good for getting started with the API, but in the end you still have to fix things up yourself.
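To make one such edge case concrete (my example, not necessarily the parent's): Notion's blocks.children.list endpoint returns at most 100 blocks per call and only one level of nesting, so a faithful reader has to both loop on next_cursor and recurse on has_children; one-shot generated code tends to miss one or the other. A minimal sketch, with the token as a placeholder:

```python
# Minimal sketch of reading a Notion page's block tree correctly.
import requests

HEADERS = {
    "Authorization": "Bearer <NOTION_TOKEN>",  # placeholder
    "Notion-Version": "2022-06-28",
}

def fetch_all_blocks(block_id: str) -> list:
    """Return every child block, following both pagination and nesting."""
    blocks, cursor = [], None
    while True:
        params = {"page_size": 100}
        if cursor:
            params["start_cursor"] = cursor
        resp = requests.get(
            f"https://api.notion.com/v1/blocks/{block_id}/children",
            headers=HEADERS, params=params, timeout=30)
        resp.raise_for_status()
        data = resp.json()
        for block in data["results"]:
            blocks.append(block)
            if block.get("has_children"):
                # children are NOT inlined; each needs its own request chain
                block["children"] = fetch_all_blocks(block["id"])
        if not data.get("has_more"):
            return blocks
        cursor = data["next_cursor"]
```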
LLMs are wonderful for migrations. They are also good at exploring APIs.
A month ago I migrated our company's website and blog from Framer to Astro (https://quesma.com/blog/ if you would like to see the end result).
This weekend I created a summary of Grafana dashboard data. LLMs are tireless at checking hypotheses, running grunt code, seeing results, and iterating on that.
What takes more than a single prompt is checking that the result is correct (nothing missed, nothing confabulated, no silent default fallbacks) and maintaining code quality (I refactor early and often; this is one place where, in Claude Code, there is no way around using Opus 4.1). Most of my time spent talking with Claude Code is on refactoring, and that requires the most knowledge of tooling, abstractions, etc.
Someone's given it a shot: https://github.com/obsidianmd/obsidian-importer/pull/424
It's funny reading these threads. A GIF of a few clicks is evidence that "it works" for the author. It's like citing lines of code as a measure of your productivity: it appears impressive at first glance, but any expert will call you out as a snake oil salesman.
My guess is this is close to the level of testing they put in to ensure the AI-generated code works (based on my experience with other AI-heavy devs). They didn't take any time to thoroughly review or understand the code. A large file doesn't necessarily mean shoddy work, but it certainly makes it likely.
Can't help but think that if the author of that PR had been less defeatist and snarky, they would have had a chance at a decent discussion about it being a viable option (with AI).
It's 1100 lines of code in a single file that nobody understands. That's instant spaghetti right there, not a valid PR.
At least have it split into a few files and structured in some way.
how is a single 1100 line file that handles Notion’s API format “spaghetti”?
spaghetti code is code that snakes through the code base in a hard-to-follow way
I don't get the LLM shilling. If you think you can earn 50k with some prompts, then earn it. Why _instead_ shill for LLMs? It feels like stock traders selling courses on how YOU could earn big bucks, while they themselves have photos taken with Ferraris rented for a day…
We all lean on LLMs in one way or another, but HN is becoming infested with wishful prompt engineers. Show, don't tell. Compete on the market instead of producing yet another PoC.
Uh, sorry?
The bounty here is just $5k, and if you read my comment, I'm suggesting that the maintainer(s), even with LLMs, will likely spend a similar amount in inference plus the cost of their own time; however, they'll likely produce a more robust solution than the bounty alone will.
To be clear: I’m not advocating that someone simply vibe-codes this up.
That’s already happening (2 PRs when this hit HN), both with negative feedback.
I’m suggesting that the maintainers should give LLM-assisted dev a try here, as they already have context on the Obsidian-side API.
>Feels like stock traders having courses for how YOU could earn big bucks. They themselves have photos taken with one day rented Ferraris…
Do you think the person you are replying to is Sam Altman?
In addition to what's already in the thread, I assume by now somebody has vibecoded an agent to scan GitHub for bounties and then automatically vibe up a corresponding solution. Will be a fun source of spam for anyone who wants to do the right thing and pay people for good work.
I recently got my first AI generated PR for a project I maintain and it was honestly a little stressful.
My first clue was that the PR description was absurdly detailed and well structured... yet the actual changes were really scattershot. A human with the experience and attention to detail to produce that detailed a description would likely also have broken it down into separate PRs.
And the code seemed alright until I noticed a small one-line change: a UI component had been replaced with a comment stating "Instantiating component now requires X".
Except the new instantiation wasn't anywhere. Their coding agent had commented out instantiating the component instead of figuring out the dependency injection.
That component was the container for all of the app's settings.
-
It's interesting because the PR wasn't entirely useless: individual parts of it were good enough that even if I took over the PR I'd be fine keeping them.
But whatever coded it didn't understand the architecture well enough. I suspect whoever was piloting it tested the core functionality and assumed their small UI changes wouldn't break anything.
I hope we normalize just admitting when most of a piece of code is AI generated. I'm not a luddite about these tools, but it also changes how I'll approach a piece of code.
Things that are easy for humans get very hard for AI, and vice versa.
> I hope we normalize just admitting when most of a piece of code is AI generated.
People using AI tools in their work is becoming normal. In the end, it doesn't matter how the code was created if it works and is otherwise high quality. The person contributing is responsible for checking the quality of their contributions. Generally, a pull request that changes half the system without good motivation is clearly not acceptable in most OSS projects. Likewise, a pull request that ignores the existing design and conventions is not acceptable. If you make such a pull request manually, it will probably also get rejected, and you'll get told off by the repository maintainers.
The beauty of the pull request system is that it puts the responsibility on the PR creator to make sure their pull request is good enough. Creating huge pull requests is generally not appreciated and creates a lot of review overhead. It's also good practice to work via the issue tracker and discuss your plans before you start the work, especially with bigger changes. The problem here is not necessarily AI, but people using AI to create low-quality pull requests and people not communicating properly.
I've not yet received any obviously AI-generated pull requests on any of my projects. But I've used Codex on my own projects for a few pull requests. I'd probably disclose that fact if I were going to contribute something to somebody else's code base, and I'd also take the time to properly clean up the pull request and make sure it delivers as promised.
I can’t stand people who open a pull request and ask for review without first reviewing their own diff. They should have caught that themselves.
Not only admitting it: it should be law to mark anything AI-generated as AI-generated, even if AI contributed just a tiny bit. I don't want to use AI slop, and I should be allowed to make informed decisions based on that preference.
Did you by any chance type this comment on a device that has autocorrect enabled?
Autocorrect is not generative AI in the way that anyone is using that word. Also autocorrect doesn't even need to use any sort of ML model.
Ah yes the duality of anti-AI crowds on HN. “GenAI is just fancy autocorrect”, and “autocorrect isn’t actually GenAI”.
The thing is, if you’re talking about making laws (as GP is), your “surely people understand this difference” strategy doesn’t matter squat and the impact will be larger than you think.
You don't seem to understand what people mean when they say "AI is just fancy autocorrect". They're talking about the little word suggestions above the keyboard, not about correcting spelling. And yes, of course those suggestions are provided by some sort of ML model, and yes, if you actually wrote a whole article using only them, it should be marked as AI-generated, but literally no one is doing that. Maybe because it's not fancy enough autocorrect. Either way, this is not the gotcha you think it is.
But the original poster said:
>Even if AI contributed just a tiny bit.
Which would imply autocorrect should be reported as AI use.
A law like this would obviously need some sort of sensible definition of what "AI" means in this context. Online translation tools also use ML models, and even the systems that unlock your device with your face do, so classifying all of that as "AI contributions" would make the definition completely useless.
I assume the OP was talking about things like LLMs and diffusion models which one could definitely single out for regulatory purposes. At the end of the day I don't think it would ever be realistically possible to have a law like this anyway, at least not one that wouldn't come with a bunch of ambiguity that would need to be resolved in court.
OK, so define it for us, please. Because, once again, this thread is talking about introducing laws about "AI". OP was talking about LLMs, you say; so SLMs are fine, then? If not, where is the boundary? If they're fine, then congratulations: you have created a new industry of people pushing the boundaries of what SLMs can do, as well as how they are defined.
Laws are built on definitions, and this hand-wavy BS is how we got nonsense like the current version of the AI Act.
Why are you so mad at me? I'm not even the OP you should be asking these questions. I'm also not convinced we need regulation like this in the first place, so I can't tell you where the boundary should be, but a boundary could certainly be found, and it would be well beyond simple spellchecking autocorrect.
I also don't understand why you think this would be so impossible to define. There are regulations for all kinds of areas where specific things are targeted, like chemicals or drugs, and just because some of those regulations have incentivized people to slightly change a regulated thing into an unregulated one does not mean we don't regulate those areas at all. So what makes AI systems so different that you think it'd be impossible to find an adequate definition?
Having once used the Notion API to build an OpenAPI doc generator, I pity whoever takes this on. The API was painful to integrate with, full of limitations, and nowhere near feature parity with the Notion UI itself.
As someone who has written a fair share of Notion API code: the $5,000 bounty is not enough, and I'm only half-joking here.
That being said, yay open source bounties! People should do more of those.
There are a few more bounties like this out there.
1. Tenstorrent https://github.com/tenstorrent/tt-metal/issues?q=is%3Aissue%... $200 - $3,000 bounties
2. microG https://github.com/microg/GmsCore/issues/2994 $10,000 bounty
3. Li Haoyi https://github.com/orgs/com-lihaoyi/discussions/6 multiple bounties (already mentioned upthread)
4. Algora also hosts bounties for COSS (Commercial OSS) https://algora.io/bounties
This seems very exploitative of their user base. In a way that’s becoming more and more common.
Although Obsidian isn’t open source, the community has a very similar vibe. Very anti-big-corporate-overlord.
But maybe not; maybe the world of bounties is just one I'm not in the loop on, and this is common.
How is it exploitative?
There are also open bounties from comma.ai; is this becoming more common? https://github.com/orgs/commaai/projects/26/views/1
Comma.ai since its founding, and Tinygrad now, both started by George Hotz, only hire candidates who first solve one of their bounties.
https://tinygrad.org/#worktiny
What’s the easiest way to convert all dataviews in an existing Obsidian vault to Bases?
Last I looked, Dataview was way more powerful than Bases, so "convert all dataviews" is likely impossible.
There's this community-made Dataview to Bases script:
https://github.com/Quorafind/Bases-Toolbox
> Please only apply if you have taken time to explore the Importer codebase, as well as the Notion API.
Suddenly $5k doesn't sound as good.
Unless you've already done projects in both. Then, it might seem trivial? Idk. I haven't looked at either. But if there is such a person out there, with the spare time to look into it, they might be ideally suited!
Why? It doesn't say you need to have extensive experience with them. I would assume this is mostly to dissuade applicants that are not aware of the potential challenges ahead.
This "exploring" can take tremendous amounts of time, depending on the complexity of these APIs. My time is worth a lot to myself. I am not going to spend many hours for a chance of winning 5k$. If this takes a week off of my free time its not worth 5k to me.
If you get paid more than $5k a week, this clearly isn't for you.
If you think that's what I wrote, you should take a reading comprehension course.
While I understand you wrote "free time", this isn't Reddit. Keep the snarkiness to a minimum.
This website is worse than Reddit. Pretentious AI bros everywhere. It even has the Reddit color.
My Obsidian is slow to start on my phone (~30s, latest iPhone), and even on my computer it's ~10s. I have maybe 1000 notes, no backlinks, and I'm using the Vim extension and the default theme. It uses iCloud backup.
I can’t figure out why it’s so damn slow. But also, how is it any better than Notion at that point?
It sounds like iCloud's automatic offloading "feature", which deletes your local files and re-downloads them as needed. Go to the iCloud folder for Obsidian and set it to "Keep Downloaded".
Wow. I think this is the answer! Thanks!
The mobile app had a notice about iCloud for a long time (I forget whether it's gone now), but usually you're running into Obsidian trying to sync all of the files and rebuild its internal cache when it opens, instead of being able to do any background sync.
As for why it's better than Notion at that point: well, if you wanted to, you could use a "faster" app like iA Writer on your phone, or anything that can open Markdown. That remains its biggest benefit; you're never locked into files that live only on someone else's server.