Hacker News | b33j0r's comments

Just don’t take one if another one is operating nearby. If they see another Waymo, having passed the insecure emotional Turing test, they get self-conscious and wander the neighborhood backstreets until the other one has dropped off its passengers.

(Just experienced this multiple times in Phoenix. It’s impressive at navigating and braking, but not rational planning or flocking.)


This has not been my experience at all, and I take Waymos pretty frequently. Especially at popular areas like concerts or airports, you'll see a bunch of them dropping off/picking up people without issues.

Ok, so I don’t have an NFL team. I played in high school and like the sport, but find it difficult to be loyal to a color and a logo. I also never watch ads at home on any platform.

So. Am I the only one who kind of likes watching the commercials more than the game when my family or friends make me watch football? They are entertaining when you only see them every now and then.

Now, banner ads are not in the same category. But above is a real use-case for enjoyment of ads.


They get old fast. A few really iconic adverts I could imagine watching once per decade indefinitely, but for most the first time is enough, and where an agency made several similar ads I probably don't need to see all of them even once. Here's an example of an iconic ad I grew up with that I could imagine wanting to see again some day:

https://www.youtube.com/watch?v=zPFrTBppRfw

https://en.wikipedia.org/wiki/Accrington_Stanley_F.C. -- for US readers, the UK has a "football pyramid": a hierarchy in which the elite teams you've probably heard of compete in a national league, but every year the worst of those teams can be replaced by the best from the league below. This repeats in layers like a pyramid, until eventually you're talking about friends or co-workers who play other similar teams in their local area, maybe in some public park, for the love of the game. Accrington Stanley is in the middle of that pyramid: it hires professional players and has a dedicated ground to play football, but we're not talking superstar lifestyles or billion-dollar stadiums.


The only thing I agree with the current US president about is that American Football should be called something else.

- Helmetball

- Gridiron

- Scrimmage

- Brain-B-Gone

- Turnover (if you are Bo Nix)

- Fumblederp

- Kicks and Giggles


Lock-free queues and 16-core processors exist though. I use actors for the abstraction primarily anyway.
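The actor abstraction mentioned here can be sketched with plain std channels. (This is my illustration, not the commenter's code; std's mpsc channel is not lock-free, but the message-passing shape is the same.)

```rust
use std::sync::mpsc;
use std::thread;

// Messages the actor understands.
enum Msg {
    Add(i64),
    // Reply channel so callers can query state without sharing memory.
    Get(mpsc::Sender<i64>),
}

// Minimal actor: a thread that exclusively owns its state and
// reacts to messages pulled off its queue.
fn spawn_counter() -> mpsc::Sender<Msg> {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        let mut total: i64 = 0;
        for msg in rx {
            match msg {
                Msg::Add(n) => total += n,
                Msg::Get(reply) => {
                    let _ = reply.send(total);
                }
            }
        }
    });
    tx
}

fn main() {
    let actor = spawn_counter();
    actor.send(Msg::Add(2)).unwrap();
    actor.send(Msg::Add(3)).unwrap();
    let (reply_tx, reply_rx) = mpsc::channel();
    actor.send(Msg::Get(reply_tx)).unwrap();
    assert_eq!(reply_rx.recv().unwrap(), 5);
}
```

Because only the actor thread touches `total`, there is no locking in user code regardless of how many cores are sending.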

I usually do most of the engineering and it works great for writing the code. I’ll say:

> There should be a TaskManager that stores Task objects in a sorted set, with the deadline as the sort key. There should be methods to add a task and pop the current top task. The TaskManager owns the memory when the Task is in the sorted set, and the caller to pop should own it after it is popped. To enforce this, the caller to pop must pass in an allocator and will receive a copy of the Task. The Task will be freed from the sorted set after the pop.

> The payload of the Task should be an object carrying a pointer to a context and a pointer to a function that takes this context as an argument.

> Update the tests and make sure they pass before completing. The test scenarios should relate to the use-case domain of this project, which is home automation (see the readme and nearby tests).
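A rough sketch of what that prompt describes, in Rust. The original was likely a language with explicit allocators (Zig or C, judging by "pass in an allocator"); here Rust's move semantics stand in for the allocator-based ownership handoff, and everything beyond the names `Task` and `TaskManager` is my guess at the intent:

```rust
use std::collections::BTreeMap;

// Hypothetical Task payload: a context plus a function taking that context.
struct Task {
    deadline: u64,
    run: fn(&str),
    ctx: String,
}

// TaskManager owns Tasks while they sit in the deadline-sorted set.
// (BTreeMap keyed by deadline; ties on deadline are ignored in this sketch.)
struct TaskManager {
    tasks: BTreeMap<u64, Task>,
}

impl TaskManager {
    fn new() -> Self {
        TaskManager { tasks: BTreeMap::new() }
    }

    fn add(&mut self, task: Task) {
        self.tasks.insert(task.deadline, task);
    }

    // Removes the earliest-deadline task; ownership moves to the caller,
    // mirroring the "pop returns a copy, set frees its own" contract.
    fn pop(&mut self) -> Option<Task> {
        self.tasks.pop_first().map(|(_, task)| task)
    }
}

fn main() {
    let mut mgr = TaskManager::new();
    // Home-automation-flavored tasks, per the prompt's domain note.
    mgr.add(Task { deadline: 20, run: |c| println!("running: {c}"), ctx: "water plants".into() });
    mgr.add(Task { deadline: 10, run: |c| println!("running: {c}"), ctx: "lock door".into() });

    let t = mgr.pop().unwrap();
    assert_eq!(t.ctx, "lock door"); // earliest deadline first
    (t.run)(&t.ctx);
    assert_eq!(mgr.pop().unwrap().deadline, 20);
    assert!(mgr.pop().is_none());
}
```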


I feel that with such an elaborate description you aren't too far away from writing that yourself.

If that's the input needed, then I'd rather write the code and rely on smarter autocomplete. While I write the code and think about it, I can judge whether the LLM is doing what I mean to do, or straying away from something reasonable to write and maintain.


Yeah, I feel like I get really good results from AI, and this is very much how I prompt as well. It just takes care of writing the code, making sure to update everything touched by that code, guided by linters and type-checkers; but it's always executing my architecture and algorithm, and I spend time carefully trying to understand the problem before I even begin.

But this is what I don't get. Writing code is not that hard. If the act of physically typing my code out is a bottleneck to my process, I am doing something wrong. Either I've under-abstracted, or over-abstracted, or flat out have the wrong abstractions. It's time to sit back and figure out why there's a mismatch with the problem domain and come back at it from another direction.

To me this reads like people have learned to put up with poor abstractions for so long that having the LLM take care of it feels like an improvement? It's the classic C++ vs Lisp discussion all over again, but people forgot the old lessons.


> Writing code is not that hard.

It's not that hard, but it's not that easy. If it was easy, everyone would be doing it. I'm a journalist who learned to code because it helped me do some stories that I wouldn't have done otherwise.

But I don't like to type out the code. It's just no fun to me to deal with what seem to me arbitrary syntax choices made by someone decades ago, or to learn new jargon for each language/tool (even though other languages/tools already have jargon for the exact same thing), or to wade through someone's undocumented code to understand how to use an imported function. If I had a choice, I'd rather learn a new human language than a programming one.

I think people like me, who (used to) code out of necessity but don't get much gratification out of it, are one of the primary targets of vibe coding.


I'm pretty damn sure the parent, by saying "writing code", meant the physical act of pushing down buttons to produce text, not the problem-solving process that precedes writing said code.

This. Most people defer the solving of hard problems to when they write the code. This is wrong, and too late to be effective. In one way, using agents to write code forces the thinking to occur closer to the right level - not at the code level - but in another way, if the thinking isn’t done or done correctly, the agent can’t help.

Disagree. No plan survives first contact.

I can spend all the time I want inside my ivory tower, hatching out plans and architecture, but the moment I start hammering letters in the IDE my watertight plan suddenly looks like Swiss cheese: constraints and edge cases that weren't accounted for during planning, flows that turn out to be unfeasible without a clunky implementation, etc...

That's why writing code has become my favorite method of planning. The code IS the spec, and English is woefully insufficient when it comes to precision.

This makes agentic workflows even worse, because you'll only discover your architectural flaws much later in the process.


I also think this is why AI works okay-ish on tiny new greenfield webapps and absolutely doesn't on large legacy software.

You can't accurately plan every little detail in an existing codebase, because you'll only find out about all the edge cases and side effects when trying to work in it.

So, sure, you can plan what your feature is supposed to do, but your plan of how to do that will change the minute you start working in the codebase.


Yeah, I think this is the fundamental thing I'm trying to get at.

If you think through a problem as you're writing the code for it, you're going to end up heading up the wrong creek, because you'll have been furiously head-down rowing the entire time, paying attention to whatever local problem you were solving or whatever piece of syntax or library trivia or compiler satisfaction game you were doing instead of the bigger picture.

Obviously, before starting writing, you could sit down and write a software design document that worked out the architecture, the algorithms, the domain model, the concurrency, the data flow, the goals, the steps to achieve it and so on; but the problem with doing that without an agent is that it then becomes boring. You've basically laid out a plan ahead of time and now you've just got to execute on the plan, which means (even though you might fairly often revise the plan as you learn unknown unknowns or iterate on the design) that you've kind of sucked all the fun and discovery out of the code-writing process. And it sort of means that you've essentially implemented the whole thing twice.

Meanwhile, with a coding agent, you can spend all the time you like building up that initial software design document, or specification, and then you can have it implement that. Basically, you can spend all the time in your hammock thinking through things and looking ahead, but then have that immediately directly translated into pull requests you can accept or iterate on instead of then having to do an intermediate step that repeats the effort of the hammock time.

Crucially, this specification or design document doesn't have to remain static. As you discover problems or limitations or unknown unknowns, you can revise it and then keep executing on it, meaning it's a living documentation of your overall architecture and goals as they change. This means that you can really stay thinking about the high level instead of getting sucked into the low level. Coding agents also make it much easier to send something off to vibe out a prototype, or explore the code base of a library or existing project in detail to figure out the feasibility of some idea, meaning that the parts that traditionally would have been a lot of effort to verify that your planning makes sense have a much lower activation energy. So you're more likely to actually try things out in the process of building a spec.


I believe programming languages are the better language for planning architecture, the algorithms, the domain model, etc... compared to English.

The way I develop mirrors the process of creating said design document. I start with a high level overview, define what Entities the program should represent, define their attributes, etc... only now I'm using a more specific language than English. By creating a class or a TS interface with some code documentation I can use my IDE's capabilities to discover connections between entities.

I can then give the code to an LLM to produce a technical document for managers or something. It'll be a throwaway document because such documents are rarely used for actual decision making.

> Obviously, before starting writing, you could sit down and write a software design document that worked out the architecture, the algorithms, the domain model, the concurrency, the data flow, the goals, the steps to achieve it and so on;

I do this with code, and the IDE is much better than MS Word or whatevah at detecting my logical inconsistencies.


The problem is that you actually can't really model or describe a lot of the things that I do with my specifications using code without just ending up fully writing the low level code. Most languages don't have a type system that actually lets you describe the logic and desired behavior of various parts of the system and which functions should call which other functions and what your concurrency model is and so on without just writing the specific code that does it; in fact, I think the only languages that would allow you to do something like that would have to be like dependently typed languages or languages adjacent to formal methods. This is literally what the point of pseudocode and architecture graphs and so on are for.

Ah, perhaps. I understood it a little more broadly to include everything beyond pseudocode, rather than purely being able to use your fingers. You can solve a problem with pseudocode, and seasoned devs won't have much of an issue converting it to actual code, but it's not a fun process for everyone.

Yeah I basically write pseudocode and let the ai take it from there.

But this is exactly my point: if your "code" is different than your "pseudocode", something is wrong. There's a reason why people call Lisp "executable pseudocode", and it's because it shrinks the gap between the human-level description of what needs to happen and the text that is required to actually get there. (There will always be a gap, because no one understands the requirements perfectly. But at least it won't be exacerbated by irrelevant details.)

To me, reading the prompt example half a dozen levels up reminds me of Greenspun's tenth rule:

> Any sufficiently complicated C++ program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp. [1]

But now the "program" doesn't even have formal semantics and isn't a permanent artifact. It's like running a compiler and then throwing away the source program and only hand-editing the machine code when you don't like what it does. To me that seems crazy and misses many of the most important lessons from the last half-century.

[1]: https://en.wikipedia.org/wiki/Greenspun%27s_tenth_rule (paraphrased to use C++, but applies equally to most similar languages)


Replying to sibling comment:

the problem is that you actually have to implement that high-level DSL to get Lisp to look like that, and most DSLs are not going to be as concise and abstract as a natural-language description of what you want. And then you'd still have to make sure the DSL resulted in the right thing, which is exactly what I'd want to use AI for: writing that initial boilerplate from a high-level description of what the DSL should do.

And a Lisp macro DSL is not going to help with automating refactors, automatically iterating to take care of small compiler issues or minor bugs without your involvement so you can focus on the overall goal, remembering or discovering specific library APIs or syntax, etc.


I think of it more like moving from sole developer to a small team lead. Which I have experienced in my career a few times.

I still write my code in all the places I care about, but I don’t get stuck on “looking up how to enable websockets when creating the listener before I even pass anything to hyper.”

I do not care to spend hours or days to know that API detail from personal pain, because it is hyper-specific, in both senses of hyper-specific.

(For posterity, it’s `with_upgrades`… thanks chatgpt circa 12 months ago!)


It's not hard, but it's BORING.

I get my dopamine from solving problems, not trying to figure out why that damn API is returning the wrong type of field for three hours. Claude will find it out in minutes - while I do something else. Or from writing 40 slightly different unit tests to cover all the edge cases for said feature.


> it's time to sit back and figure out why there's a mismatch with the problem domain and come back at it from another direction

But this is exactly what LLMs help me with! If I decide I want to shift the abstractions I'm using in a codebase in a big way, I'd usually be discouraged by all the error, lint, and warning chasing I'd need to do to update everything else; with agents I can write the new code (or describe it and have it write it) and then have it set off and update everything else to align: a task that is just varied and context specific enough that refactoring tools wouldn't work, but is repetitive and time consuming enough that it makes sense to pass off to a machine.

The thing is that it's not necessarily a bottleneck in terms of absolute speed (I know my editor well and I'm a fast typist, and LLMs are in their dialup era), but it is a bottleneck in terms of motivation, when some refactor or change in algorithm I want to make requires a lot of changes all over a codebase that are boring to make but not quite rote enough to handle with sed or IDE refactoring.

It really isn't, for me, even mostly about the inconvenience of typing out the initial code. It's about the inconvenience of trying to munge text from one state to another, or handle big refactors that require a lot of little mostly rote changes in a lot of places; but it's also about dealing with APIs or libraries where I don't want to have to constantly remind myself what functions to use, what to pass as arguments, what config data I need to construct to pass in, etc, or spend hours trawling through docs to figure out how to do something with a library when I can just feed its source code directly to an LLM and have it figure it out.

There's a lot of friction and snags to writing code beyond typing that have nothing to do with having come up with a wrong abstraction, and that very often lead to me missing the forest for the trees when I'm in the weeds.

Also, there is ALWAYS boilerplate scaffolding to do, even with the most macrotastic Lisp; and let's be real: Lisp macros have their own severe downsides in return for eliminating boilerplate, and Lisp itself is not really the best language (in terms of ecosystem, toolchain, runtime, performance) for many or most tasks someone like me might want to do, and languages adapted to the runtime and performance constraints of their domain may be more verbose.

Which means that, yes, we're using languages that have more boilerplate and scaffolding to do than strictly ideally necessary, which is part of why we like LLMs, but that's just the thing: LLMs give you the boilerplate eliminating benefits of Lisp without having to give up the massive benefits in other areas of whatever other language you wanted to use, and without having to write and debug macro soup and deal with private languages.

There's also how staying out of the code writing oar wells changes how you think about code as well:

If you think through a problem as you're writing the code for it, you're going to end up heading up the wrong creek, because you'll have been furiously head-down rowing the entire time, paying attention to whatever local problem you were solving or whatever piece of syntax or library trivia or compiler satisfaction game you were doing instead of the bigger picture.

Obviously, before starting writing, you could sit down and write a software design document that worked out the architecture, the algorithms, the domain model, the concurrency, the data flow, the goals, the steps to achieve it and so on; but the problem with doing that without an agent is that it then becomes boring. You've basically laid out a plan ahead of time and now you've just got to execute on the plan, which means (even though you might fairly often revise the plan as you learn unknown unknowns or iterate on the design) that you've kind of sucked all the fun and discovery out of the code-writing process. And it sort of means that you've essentially implemented the whole thing twice.

Meanwhile, with a coding agent, you can spend all the time you like building up that initial software design document, or specification, and then you can have it implement that. Basically, you can spend all the time in your hammock thinking through things and looking ahead, but then have that immediately directly translated into pull requests you can accept or iterate on, instead of then having to do an intermediate step that repeats the effort of the hammock time.


What you’re describing makes sense, but that type of prompting is not what people are hyping.

The more accurate prompt would be “You are a mind reader. Create me a plan to create a task manager, define the requirements, deploy it, and tell me when it’s done.”

And then you just rm -rf and repeat until something half works.


"Here are login details to my hosting and billing provider. Create me a SaaS app where customers could rent virtual pets. Ensure it's AI and blockchain and looks inviting and employ addictive UX. I've attached company details for T&C and stuff. Ensure I start earning serious money by next week. I'll bump my subscription then if you deliver, and if not I will delete my account. Go!"

I haven't tried it, but someone at work suggested using voice input for this because it's so much easier to add details and constraints. I can certainly believe it, but I hate voice interfaces, especially if I'm in an open space setting.

You don't even have to be as organised as in the example, LLMs are pretty good at making something out of ramblings.


This is a good start. I write prompts as if I were instructing a junior developer to do stuff I need. I make it as detailed and clear as I can.

I actually don't like _writing_ code, but enjoy reading it. So sessions with LLM are very entertaining, especially when I want to push boundaries (I am not liking this, the code seems a little bit bloated. I am sure you could simplify X and Y. Also think of any alternative way that you reckon will be more performant that maybe I don't know about). Etc.

This doesn't save me time, but makes work so much more enjoyable.


> I actually don't like _writing_ code, but enjoy reading it.

I think this is one of the divides between people who like AI and people who don't. I don't mind writing code per se, but I really don't like text editing — and I've used Vim (Evil mode) and then Emacs (vanilla keybindings) for years, so it's not like I'm using bad tools; it's just too fiddly. I don't like moving text around; munging control structures from one shape to another; I don't like the busy work of copying and pasting code that isn't worth DRYing, or isn't capable of being DRY'd effectively; I hate going around and fixing all the little compiler and linter errors produced by a refactor manually; and I really hate the process of filling out the skeleton of a type/class/whatever architecture in a new file before getting to the meat.

However, reading code is pretty easy for me, and I'm very good at quickly putting algorithms and architectures I have in my head into words — and, to be honest, I often find this clarifies the high level idea more than writing the code for it, because I don't get lost in the forest — and I also really enjoy taking something that isn't quite good enough, that's maybe 80% of the way there, and doing the careful polishing and refactoring necessary to get it to 100%.


I don't want to be "that guy", but I'll indulge myself.

> I think this is one of the divides between people who like AI and people who don't. I don't mind writing code per se, but I really don't like text editing — and I've used Vim (Evil mode) and then Emacs (vanilla keybindings) for years, so it's not like I'm using bad tools; it's just too fiddly.

I feel the same way (to at least some extent) about every language I've used other than Lisp. Lisp + Paredit in Emacs is the most pleasant code-wrangling experience I've ever had, because rather than having to think in terms of characters or words, I'm able to think in terms of expressions. This is possible with other languages thanks to technologies like Tree-sitter, but I've found that it's only possible to do reliably in Lisp. When I do it in any other language I don't have an unshakable confidence that the wrangling commands will do exactly what I intend.


A bit different from you

When I code, I mostly go by two perspectives: The software as a process and the code as a communication medium.

With the software as a process, I'm mostly thinking about the semantics of each expression. Either there's a final output (transient, but important) or there's a mutation to some state. So the code I'm writing is for making either one possible, and the process is very pleasing, like building with Lego. The symbols are the bricks and other pieces which I'm using to create things that do what I want.

With the code as communication, I mostly take the above and make it readable. Like organizing files, renaming variables and functions, modularising pieces of code. The intent is for other people (including future me) to be able to understand and modify what I created in the easiest way possible.

So the first is me communicating with the machine, the second is me communicating with the humans. The first is very easy, you only need to know the semantics of the building blocks of the machine. The second is where the craft comes in.

Emacs (also Vim) makes both easy. Code has a very rigid structure, and both have tools that let you manipulate that structure, either for adding new actions or refining the shape for understanding.

With AI, it feels like painting with a brick. Or transmitting critical information through a telephone game. Control and Intent are lost.


> With AI, it feels like painting with a brick. Or transmitting critical information through a telephone game. Control and Intent are lost.

On the other hand, most AI zealots (Steve Yegge comes readily to mind) don't care about what the code looks like. They never even see it.


Yes! Don't worry about it, I very much agree. However, I do think that even if/when I'm using Lisp and have all the best structural editing capabilities at my disposal, I'd still prefer to have an agent do my editing for me; I'd just be 30% more likely to jump in and write code myself on occasion — because ultimately, even with structural editing, you're still thinking about how to apply this constrained set of operations to manipulate a tree of code to get it to where you want, and then having to go through the grunt work of actually doing that, instead of thinking about what state you want the code to be in directly.

Vehement agreeing below:

S-expressions are a massive boon for text editing, because they allow such incredible structural transformations and motions. The problem is that, personally, I don't actually find Lisp to be the best tool for the job for any of the things I want to do. While I find Common Lisp and to a lesser degree Scheme to be fascinating languages, the state of the library ecosystem, documentation, toolchain, and IDEs around them just isn't satisfactory to me, and they don't seem really well adapted to the things I want to do. And yeah, I could spend my time optimizing Common Lisp with `declare`s and doing C-FFI with it, massaging it to do what I want, but that's not what I want to spend my time doing. I want to actually finish writing tools that are useful to me.

Moreover, while I used to have hope for tree-sitter to provide a similar level of structural editing for other languages, at least in most editors I've just not found that to be the case. There seem really to be two ways to use tree-sitter to add structural editing to languages: one, to write custom queries for every language, in order to get Vim style syntax objects, and two, to try to directly move/select/manipulate all nodes in the concrete syntax tree as if they're the same, essentially trying to treat tree-sitter's CSTs like S-expressions.

The problem with the first approach is that you end up with really limited, often buggy or incomplete, language support, and structural editing that requires a lot more cognitive overhead: instead of navigating a tree fluidly, you're having to "think before you act," deciding ahead of time what the specific name, in this language, is for the part of the tree you want to manipulate. Additionally, this approach makes it much more difficult to do more high level, interesting transformations; even simple ones like slurp and barf become a bit problematic when you're dealing with such a typed tree, and more advanced ones like convolute? Forget about it.

The problem with the second approach is that, if you're trying to do generalized tree navigation, where you're not up-front naming the specific thing you're talking about, but instead navigating the concrete syntax tree as if it's S-expressions, you run into the problem the author of Combobulate and Mastering Emacs talks about[1]: CSTs are actually really different from S-expressions in practice, because they don't map uniquely onto source code text; instead, they're something overlaid on top of the source code text, which is not one to one with it (in terms of CST nodes to text token), but many to one, because the CST is very granular. Which means that there's a lot of ambiguity in trying to understand where the user is in the tree, where they think they are, and where they intend to go.

There's also the fact that tree-sitter CSTs contain a lot of unnamed nodes (what I call "stop tokens"), where the delimiters for a node of a tree and its children are themselves children of that node, siblings with the actual siblings. And to add insult to injury, most language syntaxes just... don't really lend themselves to tree navigation and transformation very well.

I actually tried to bring structural editing to a level equivalent to the S-exp commands in Emacs recently[2], but ran into all of the above problems. I recently moved to Zed, and while its implementation of structural editing and movement is better than mine, and pretty close to 1:1 with the commands available in Emacs (especially if they accept my PR[3]), and also takes the second, language-agnostic, route, it's still not as intuitive and reliable as I'd like.

[1]: https://www.masteringemacs.org/article/combobulate-intuitive...

[2]: https://github.com/alexispurslane/treesit-sexp

[3]: https://github.com/zed-industries/zed/pull/47571


This is similar to how I prompt, except I start with a text file and design the solution and paste it in to an LLM after I have read it a few times. Otherwise, if I type directly in to the LLM and make a mistake it tends to come back and haunt me later.

Just one more turn before my family can send and receive phone calls on the landline.


Well. You’d have to demonstrate that a[1] is the first offset in an array, and it’s not great curb appeal to anyone who has programmed computers before.


I don’t see a problem. That’s how people count.


It makes sense when you look at how the numeric for loop works in Lua.

In Lua you specify the “beginning” and “end” of the iteration, both included. It doesn’t work like in C, where you have an initialization and an invariant. What makes it short in C would make it longer in Lua, and vice versa.

You could argue “why not make loops like C”, then. But that can be extended to the limit: “why have a different language at all?”.
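Since Lua can't run here, a Rust sketch of the same distinction: Lua's `for i = 1, 5 do ... end` is one-based and inclusive at both ends, like Rust's `1..=5`, whereas the C idiom is zero-based and half-open, like `0..5`:

```rust
fn main() {
    // C-style / Rust default: half-open, zero-based — yields 0, 1, 2, 3, 4.
    let zero_based: Vec<i32> = (0..5).collect();

    // Lua's numeric for (`for i = 1, 5 do`) includes both endpoints:
    // one-based and inclusive — yields 1, 2, 3, 4, 5.
    let lua_style: Vec<i32> = (1..=5).collect();

    assert_eq!(zero_based, vec![0, 1, 2, 3, 4]);
    assert_eq!(lua_style, vec![1, 2, 3, 4, 5]);

    // Both visit five elements; only the index vocabulary differs.
    assert_eq!(zero_based.len(), lua_style.len());
}
```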


i think i might prefer indexing starting at zero, but it really isn't important. with c, zero-based indexing makes total sense. frankly though, for lua, how it works and what an array is, one-based indexing makes more sense, the only counter-argument being that 1-based indexing puts off people who learned a thing one way and are unable or unwilling to do it a different way. to even include it on a list of considerations for not choosing lua is a bit silly, but to highlight array indexing as the only thing you'd need to know... well, i don't know how to put that in a way that wouldn't be impolite.

either way, at least you can't toggle between indexes starting at zero and one, (at least not that i can recall.)


> either way, at least you can't toggle between indexes starting at zero and one

You can, you just have to explicitly assign something to a[0]. Lua doesn't have real arrays, just tables. You have to do it for every table you use/define though, so if you mean "toggle" as in change the default behavior everywhere then I believe you are correct.


iirc that value at key zero won't be included in any array-handling functions. if that behavior were toggleable we'd have the kind of nonsense that early APLs allowed, before they realized that's a bad thing to stuff in a global variable you can write to at any time in your program.


You implemented type-checking, not generics. For a project this ambitious, that surprises me.

“Generics” should mean that the compiler or interpreter will generate new code paths for a function or structure based on usage in the calling code.

If I call tragmorgify(int), tragmorgify(float), or tragmorgify(CustomNumberType), the expectation is that tragmorgify(T: IsANumber) tragmorgifies things that are number-like in the same way.

For a compiled language this usually means monomorphization, or generating a function for each occurring tuple of argument types. For an interpreted language it usually means duck-typing.

This is not a bad language feature per se, but also not what engineers want from generics. I would never write code like your example. The pattern of explicit type-checking itself is a well-known code smell.

There is not a good use case for adding 2.0 to a float input but 1 to an integer input. That makes your function, which should advertise a contract about what it does, a liar ;)
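The monomorphization point can be illustrated with a trait-bounded generic in Rust; `tragmorgify` is the comment's hypothetical name, and doubling is an arbitrary stand-in for "the same operation for every number-like type":

```rust
use std::ops::Add;

// One generic definition. The compiler emits a separate specialized
// function body (monomorphization) for each concrete T it is called with,
// and the behavior is the same for every number-like T — no branching
// on the runtime type inside the function.
fn tragmorgify<T: Add<Output = T> + Copy>(x: T) -> T {
    x + x
}

fn main() {
    assert_eq!(tragmorgify(21_i32), 42);   // monomorphized for i32
    assert_eq!(tragmorgify(1.5_f64), 3.0); // monomorphized for f64
}
```

Contrast with the criticized pattern: an explicit `if type == int { x + 1 } else { x + 2.0 }` inside one function, which makes the contract depend on the argument's runtime type.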


There will be customers even though it is a useless feature tier.

Monetizing knowledge-work is nearly impossible if you want everyone to be rational about it. You gotta go for irrational customers like university and giant-org contracts, and that will happen here because of institutional inertia.


Fair. But there’s even an additional difference between snarky clickbait and “giving the exact opposite impression of the truth in a headline” ;)


I got a PIP because my manager mistakenly thought I was stealing equipment, and he just kept adding agile story points until I failed.

I was too pissed off to go to an ombudsman. It honestly didn’t occur to me. It just felt like “this guy hates me and wants me out.”

He called me two years after I was fired to inquire about missing equipment that I never had.


You made the mistake of allowing a PIP to be invoked on you for a fake reason given by your manager, and you made the even more serious mistake of getting fired because you "stole equipment".

If you were innocent of that dangerous charge, you should have fought hard against it, and ensured the top management, HR head and Ombudsman became involved in the investigation.

Being fired for theft or similarly serious allegations can and will leave a permanent black mark on your employment record with HR, and it may get pulled up during a BGV (Background Verification) invoked by a prospective employer evaluating your candidacy for hire, seriously affecting your career and life.

I think you should now reach out to that company's HR and clear the black mark against you. Perhaps you can hire a lawyer who deals with employment matters, so he or she can get the matter legally resolved in your favour.

