> "Author's note: From here on, the content is AI-generated"
Ah, I see, googling the equivalent of "clear" was too much work and you had to get an LLM to do it for you. Well at least you were honest about it
What is even the point of having this post when it's literally a prompt?
A great non-AI resource on this topic: https://ss64.com/
My most used windows command is, and will always be, `ls`.
Then I'm reminded that it's not a known file or directory.
It's been nearly 20 years since powershell came out.
And we had cygwin before that. First thing I always installed on a Windows box so I could use bash and all my favorite utilities.
And it still sucks
Cygwin was so much work but you’re still stuck in windows.
It's 2026; you should not be using Command Prompt (or batch). In PowerShell, `ls` is a built-in alias for `Get-ChildItem` and has been for years, and in recent versions of Windows you'd have to go out of your way to get a Command Prompt (you'd have to open a PowerShell terminal and then run `cmd`).
On one of our Linux machines the filesystem became strange, probably because somebody mistyped `ls /bin` as `ln /bin`. I think the docs say hard-linking directories is impossible, or maybe /bin was a symlink.
Same! Closely followed by 'cat' lol. 'type' just doesn't register in my brain
VMS also uses type to dump a file to stdout.
I understand that DEC's TOPS-20 influenced CP/M and MS-DOS, so that could be the source for type.
https://en.wikipedia.org/wiki/TOPS-20
Edit: type has its own wiki, and TOPS-20 implemented it.
https://en.wikipedia.org/wiki/TYPE_(DOS_command)
Back before "type" we had "copy FILE CON".
Or you can just prepend `wsl` to the Linux command you want to run; of course, only if you have WSL set up.
https://learn.microsoft.com/en-us/windows/wsl/filesystems#ru...
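For example, something like this (a sketch, assuming WSL is already installed with a default distro; guarded so it's a no-op elsewhere):

```shell
# run a Linux command from a Windows shell by prefixing it with `wsl`
# (the guard skips the calls on machines without WSL)
if command -v wsl >/dev/null 2>&1; then
  wsl ls -la
  wsl grep -rn "TODO" .
fi
```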
Really? That's awesome
Can I just do a shout-out for UnxUtils [1]?
I've had it on every Windows computer I've used at work since forever now, and it is extremely useful to be able to use things like `sed` and `gawk` (and even `make`) from the command prompt.
[1] https://unxutils.sourceforge.net/
Yuck. Just install WSL and be done with it
Underrated secondary option: git bash. Lower setup overhead than full WSL, although it is slower if you need to work on a lot of files or spawn a lot of processes.
I guess you get Git Bash for free when you install Git, which speaks volumes about the pain of PowerShell.
Also, note that Git Bash's MinGW/MSYS environment is not Cygwin. And yes, it's slower, but it might work even under XP.
findstr is an underappreciated command line tool. I use it a lot
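For reference, a typical recursive, case-insensitive search in both worlds (the findstr flags are /s for subdirectories, /i for case-insensitive, /n for line numbers; the grep line is the rough Unix equivalent, run here on a scratch file so the sketch is self-contained):

```shell
# Windows equivalent: findstr /s /i /n "error" *.txt

# make a scratch file, then search it the grep way
tmp=$(mktemp -d)
printf 'line one\nAn ERROR occurred\n' > "$tmp/log.txt"
grep -rni "error" "$tmp"
```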
Let me present you my favorite, how do you figure out dirname, basename and filename in batch script?
set filepath="C:\some path\having spaces.txt"
for /F "delims=" %%i in (%filepath%) do set dirname="%%~dpi"
for /F "delims=" %%i in (%filepath%) do set filename="%%~nxi"
for /F "delims=" %%i in (%filepath%) do set basename="%%~ni"
echo %dirname%
echo %filename%
echo %basename%
It is just as intuitive as one would expect. You could just one-line it, too.
> Author's note: From here on, the content is AI-generated
Kudos to the author for their honesty in admitting AI use, but this killed my interest in reading this. If you can use AI to generate this list, so can anyone. Why would I want to read AI slop?
HN already discourages AI-generated comments. I hope we can extend that to include a prohibition on all AI-generated content.
> Don't post generated comments or AI-edited comments. HN is for conversation between humans.
If the author had also included a note explaining that he'd *reviewed* what the AI produced and checked it for correctness, I would be willing to trust the list. As it is, how do I know the `netstat` invocation is correct, and not an AI hallucination? I'll have to check it myself, obviating most of the usefulness of the list. The only reason such a list is useful is if you can trust it without checking.
How would you know the invocation is correct when written by a human? Don’t humans make mistakes?
Sure, humans make mistakes... but rarely, vanishingly rarely about commands they use often. Are you going to make a non-typo kind of mistake when typing `ls -l`? AI hallucinations don't happen all the time, but they happen so much more often than "vanishingly rarely".
That's why you can't just vibe-code something and expect it to work 100% correctly with no design flaws, you need to check the AI's output and correct its mistakes. Just yesterday I corrected a Claude-generated PR that my colleague had started, but hadn't had time to finish checking before he went on vacation. He'd caught most of its mistakes, but there was one unit test that showed that Claude had completely misunderstood how a couple of our services are intended to work together. The kind of mistake a human would never have made: a novice wouldn't have understood those services enough to use them in the first place, and an expert would have understood them and how they are supposed to work together.
You always, always, have to double-check the output of LLMs. Their error rate is quite low, thankfully, but on work of any significant size their error rate is pretty much never zero. So if you don't double-check them then you're likely to end up introducing more bugs than you're fixing in any given week, leading to a codebase whose quality is slowly getting worse.
If I get that kind of content, my first reaction is to close it; it's the kind of low-effort content that floods the web nowadays.
Unfortunately, at work it isn't as easy, with all the KPIs related to taking advantage of AI to "improve" our work.
I could have done better with the research, but this post had been collecting dust in my drafts, so I decided, for the first (and last) time, to finish the work I started a few months ago.
Why should you learn anything if you can just use AI to look it up? For fun is one reason.
Not bad, but one big criticism: never do a `kill -9` first. That will stop the program from cleaning up after itself, since SIGKILL cannot be caught or handled.
Use one of these instead:
-TERM then wait, if not
-INT then wait, if not
-HUP then wait, if not
-ABRT

If you are sure all of these fail, then use -9 (-KILL). But assume the program has a major bug, and try to find another program that will do the same task and use that instead.

Maybe this logic should be built into the "kill" command (or some other standard command). Given that this is the right way, it shouldn't be more tedious than the wrong way!
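The escalation described here (TERM, then INT, HUP, ABRT, and KILL as the last resort) can be sketched as a small shell function. The function name and the five-second wait are illustrative assumptions, not a standard tool:

```shell
# escalate through gentler signals before resorting to SIGKILL
graceful_kill() {
  pid=$1
  for sig in TERM INT HUP ABRT KILL; do
    kill -s "$sig" "$pid" 2>/dev/null || return 0   # signal fails => pid is gone
    i=0
    while [ "$i" -lt 5 ]; do                        # give it up to 5s to exit
      sleep 1
      kill -0 "$pid" 2>/dev/null || return 0        # -0 only checks existence
      i=$((i + 1))
    done
  done
  return 1   # still alive even after SIGKILL (very unusual)
}
```

Usage would be `graceful_kill 12345`; a real tool would also want per-signal timeouts and a dry-run mode.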
It could also monitor the target process and inform you immediately when it exits, saving you the trouble of using "ps" to confirm that the target is actually gone.
Different programs may take different amounts of time to clean up and close. Knowing whether a signal failed takes human judgment or a heuristic. A program receiving a signal can even show a confirmation dialog for the user to save things, etc., before closing.
That's a valid point. Another example is SIGHUP, which will cause some programs to exit but other programs to reload their config file. In certain very specific cases, that could even cause harm.
So really what "kill" would be doing is automating a common procedure, which is different than taking responsibility for doing it correctly. It would need to be configurable.
I still think it would be a net benefit since right now incentives push people toward doing something the wrong way (even if they know better). But I can also see how it might give people a false sense of security or something along those lines.
> automating a common procedure
It's not common. If `kill` on its own (which does just SIGTERM) doesn't work, you're already in "something wrong is happening" territory, which is why:
>>> Given that this is the right way, it shouldn't be more tedious than the wrong way!
is also the wrong way to think about this. Trying a sequence of signals is not so much "the right way" as "the best way to handle a situation that has already gone wrong". The right way is just `kill` on its own; SIGTERM should always suffice. If it doesn't, and you can't see a justifiable reason why, then you can just `kill -9`, but this should be rare.
Trying a sequence of SIGINT, SIGHUP, and SIGABRT is technically better than SIGKILL, but it isn't really important unless you also want to write a bug report about the program's signal handling or fix it yourself. As for SIGINT and SIGHUP: if SIGTERM doesn't work, it's unlikely that they would either; if they did, it would probably only be through oversight and the execution of default handlers.
`kill -9` is just like `rm -rf`. I wouldn't suggest that `rm` automatically run with `-r` or `-f` when `rm` on its own didn't work, and I wouldn't call automatically trying those flags "the right way".
`kill` is not really a command to kill processes; the name is a misnomer. `kill` is meant to send signals to processes.
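A tiny demonstration of that point: a process can trap a signal and react without dying (SIGUSR1 here is just an arbitrary user-defined signal):

```shell
# the inner shell traps SIGUSR1, signals itself, and keeps running
sh -c 'trap "echo got SIGUSR1" USR1; kill -USR1 $$; echo still alive'
```

This is the same mechanism daemons use when SIGHUP means "reload your config" rather than "exit".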
How often does plain 'kill <pid>' not work, but some other signal other than SIGKILL works?
Usually the process is either working correctly and terminates when asked, or else not working correctly and needs to be KILLed.
It is possible to install a handler for most signals, and that handler can be configured to ignore the signal.
Signal 9 cannot be ignored.
I don't think of 9 as really being a signal to the process at all, more of an instruction to the OS kernel to terminate the process
Lots of command-line tools will hold on for dear life against everything except SIGKILL. I often see this with background tasks that get one of their threads stuck in an infinite loop or a wait state.
HUP is usually sent to daemons to instruct them to reinitialize and reread their configuration files.
Is it still passed when a terminal is disconnected? I understand a dial-up modem was involved in the original intended use.
Never use `kill -9`; instead, refer to the signal by name (`kill -KILL` or `kill -s KILL`). The same number is not guaranteed to mean the same signal on all platforms.
On a modern OS, doesn’t the kernel (eventually) take care of the cleanup anyways?
This article is likely LLM-generated, and it regurgitates what should be the last resort as the first thing to try. After seeing that command I stopped reading.
> Finding a specific file by name across the system
> Linux: find / -name "config.txt"
This is not how you find a file across the entire system; you use plocate for that. find would take ages to do what plocate does instantly.
Yes and no, with `find` I know I'm getting "live" results from the filesystem, whereas plocate (and s/locate) merely searches through a database updated god knows when, assuming it's even installed and the bulk of the files indexed.
No. "Slower" is not the same as "different functionality".
In fact, "find" is guaranteed to be more correct. And more widely available.
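Both sides of that trade-off in one sketch (the scratch directory stands in for `/`, and the plocate lines are comments since the index may not exist on a given machine):

```shell
# set up a scratch tree to search
root=$(mktemp -d)
mkdir -p "$root/app/cfg"
touch "$root/app/cfg/config.txt"

# "live" search: walks the filesystem right now; always current, can be slow
find "$root" -name "config.txt"

# indexed search: near-instant, but only as fresh as the last `updatedb` run,
# and only if plocate is installed and the path was indexed:
# plocate config.txt
# sudo updatedb   # refresh the index
```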
Why would you use CMD when Powershell exists?
It's very likely just because I have almost no experience with PowerShell, whereas I now have ~4-5 years of dailying Linux, but I find the PowerShell commands very cumbersome to use. They are way too long and descriptive instead of just 2-4 letters; there are 500 different commands for very specific uses instead of 10 tools that you can combine to do almost everything; and, if I recall correctly (my memory might trick me here, though), the errors are far less readable at first glance and fill the entire terminal for simple things. CMD, meanwhile, feels like bash.
Most of my issues with it are probably just skill issues, though, since like I said I don't really use or know it a lot, so I am happy to be corrected :) I mean, if every Windows sysadmin tells me how great PowerShell is, I can't just assume that they are all wrong (or maybe it's just the only way on Windows to do over the terminal something that's otherwise simple, idk).
The verbosity, especially in cmdlet names, kind of sucks, but having everything be an object with properties and methods, versus having to chop up and parse and pipe text, is quite nice. I haven't had the pleasure of being a Linux admin professionally, so I don't have much experience on the Linux side... but as a really simple example, take getting an interface's IP address: grabbing a property from Get-NetIPAddress is easier/faster/simpler to me than chopping up text output from ifconfig.
This applies to errors of course, there are a number of properties for an error that you can look at (and use in scripts to handle errors) if the full output is too much or unclear.
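For comparison, the text-chopping this replaces on the Linux side looks something like the following. The sample line is illustrative; on a live system you'd pipe `ip -4 addr show eth0` into the same awk:

```shell
# one captured line of `ip -4 addr show` output (illustrative sample)
sample='    inet 192.168.1.10/24 brd 192.168.1.255 scope global eth0'

# chop out just the address: take field 2 and strip the /prefix length
printf '%s\n' "$sample" | awk '/inet /{ sub(/\/.*/, "", $2); print $2 }'
# -> 192.168.1.10
```

The parsing works until the output format changes, which is exactly the fragility the object pipeline avoids.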
Because PowerShell is weird and obtuse? Or because PowerShell works slightly differently in the terminal vs. the PowerShell dev environment? It's a tool most of us use under duress rather than by choice.
I certainly won't argue that pwsh is even close to perfect, but...obtuse is just about the most unfitting description of powershell. It offers a level of structure and consistency that is - even with all its shortcomings - orders of magnitude above the wild west of the daily reality of the linux cli.
Just because it's the mess we are all intimately familiar with, doesn't make it less of a mess.
"Just because it's the mess we are all intimately familiar with, doesn't make it less of a mess."

I kinda feel like you could apply that statement more to PowerShell, though.
I just don't see how Remove-Item is superior to rm, and that's just the first example that came to mind (at least there are aliases for most stuff, afaik, so I guess it's not AS bad).
I also just googled, and there seem to be 3-4 different commands (not including the aliases) that do EXACTLY the same thing; at least the Microsoft article used 1:1 the same description for all of them.
rm only removes files and directories, right? Remove-Item can be used with any PowerShell provider, such as environment variables, Active Directory, certificates, and the registry. And of course you can implement your own providers that work with the *-Item cmdlets. I don't know that I'd call either superior, or even say that they're equivalent: rm is a utility for removing files; Remove-Item is a little more than that.
"When in Rome, do as the Romans do."
I recently had a similar idea. https://github.com/Water-Run/Cmdset
ridiculous...
Why is this entry in the top 30?
Not ridiculous as long as there are still people who need to learn, right? That said, I didn't see that coming.
ok, but how do i get the only linux command i know?
ctrl+r
Works just fine in powershell. Avoid using command prompt and life is already a bit better
F7
which / where is the one that always trips me up.
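For reference, the same lookup has a different spelling in each environment (the Windows lines are comments since they only run there); `command -v` is the portable POSIX one:

```shell
command -v ls        # portable POSIX: prints the path (or builtin/alias info)
# which ls           # common on Unix, but not standardized everywhere
# Windows cmd:   where ls
# PowerShell:    Get-Command ls
```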
> Windows: netstat -n -a | findstr "https" (//note the double quotes)
netstat works perfectly fine on linux as well. If you're looking for https connections it's certainly far more efficient than 'lsof'.
also if you use '-n' then you're not going to get service names translated, so that probably should be:
netstat -n -a | find "443"
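On the Linux side the same numeric filter is a grep. Sample lines are used below so the sketch is self-contained; on a live box you'd pipe `netstat -an` (or `ss -tan`, the modern iproute2 equivalent) into the same grep:

```shell
# two netstat-style sample lines (addresses are illustrative)
sample='tcp  0 0 10.0.0.5:52100  203.0.113.7:443  ESTABLISHED
tcp  0 0 10.0.0.5:41000  203.0.113.7:80   ESTABLISHED'

# keep only the HTTPS (port 443) connections
printf '%s\n' "$sample" | grep ':443'
```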
traceroute vs tracert always catches me out.
less or at least more?
CTRL-ALT-DEL?
Can we do a satirical thread here please? I'm curious what HN can come up with :D
I'll start:
Not having to run a mess of Linux commands to install software.