First off, this is really impressive. After Opera and Microsoft dropped their engines and adopted Blink, and Mozilla gave up on Servo, I've become increasingly worried about the future of the open web. Kudos for taking matters into your own hands, and for getting this far with your project.
Now for the nitpicking. From the FAQ:
> For example the HTTP code has no implementation of features that can be used for tracking (such as ETags).
True, ETags can be used for precise client tracking (just like a cookie with a unique client ID); but they are also useful for caching resources client-side, thus reducing data usage, server resources, client processing, etc.
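For reference, here is a sketch of the conditional-request exchange that makes ETags useful for caching (header values invented):

    # First response: the server tags the resource
    HTTP/1.1 200 OK
    ETag: "33a64df5"

    # Later visit: the client revalidates instead of re-downloading
    GET /style.css HTTP/1.1
    If-None-Match: "33a64df5"

    HTTP/1.1 304 Not Modified

The tracking risk comes from the same mechanism: a server can hand each client a unique ETag value and read it back on every revalidation, exactly like a cookie.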
Since the browser/backend is already using a whitelist approach, I would like to suggest optional support for ETags for websites that the user decides to trust.
Also, unless FixBrowser/FixProxy becomes relevant enough to show up on the pie chart alongside Chrome, Firefox, and Safari, individual users can be easily fingerprinted based on e.g. IP ranges and the mere fact that the client behaves differently. This is an uphill battle, but I'm glad that efforts like this even exist.
I was not able to donate; PayPal is doing its shenanigans again (donations from my country of origin are not supported).
Anyway, what I typed there was: I read that you don't support JS intentionally. That's fine and dandy. If I were to create a browser from scratch, I'd probably do that.
However, what I'd really like to see is the ability to plug in multiple scripting engines: maybe you want to make V8 pluggable, or SpiderMonkey, or, let's open the box: plug in Python! That might make it possible to have a front-end stack that is HTML, CSS, and Python (without the JS in the middle).
It would open a whole new spectrum of Web development, one not subject to the pitfalls of JS.
WebAssembly kind of opened that door, but a native interpreter would be good to have.
P.S.: I'm aware Brython exists, but it feels like a cheat to me.
Back in the late 1990s, multi-language support was part of the original design of the <script> tag. Microsoft’s market-leading browser defaulted to VBScript rather than JavaScript. But of course people wanted interoperability rather than writing separate scripts for IE and Netscape.
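For anyone who never saw it, the selection looked roughly like this (an IE-era reconstruction; no current browser supports it):

    <script language="VBScript">
      MsgBox "Hello from VBScript"
    </script>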
Python is ill-suited for browser scripting because it boldly claims to have “batteries included”, i.e. it has a sprawling standard library, and most of it is entirely incompatible with the browser environment’s sandboxing and async execution model.
So Python for browser scripting would be a limited subset. And if you go there, what's really the point of writing programs in an incompatible version of a language whose sole reason to exist is its supposed ease of use…
> Python is ill-suited for browser scripting because it boldly claims to have “batteries included”, i.e. it has a sprawling standard library, and most of it is entirely incompatible with the browser environment’s sandboxing and async execution model.
Per a previous comment[0], Python was an example of my point; I was thinking even more along the lines of any scripting language that uses a pre-processor so that code inlining works.
> Back in the late 1990s, multi-language support was part of the original design of the <script> tag. Microsoft’s market-leading browser defaulted to VBScript rather than JavaScript.
That was in the context of the browser wars back then. Today that war is mostly settled, though we're of course still fighting to chip away at Chrome's dominance.
> But of course people wanted interoperability rather than writing separate scripts for IE and Netscape.
But my point is that it would be a kind of start: JS is too dominant in the front-end community. If you don't know JS, you're just dead in the water.
You have to inherently like JS to be an effective front-end developer. That's an unfair constraint.
WebAssembly kind of opened that door, but we are still in the early days.
--
[0]: https://news.ycombinator.com/item?id=42508950
>Python is ill-suited for browser scripting because it boldly claims to have “batteries included”
And javascript simply downloads its own batteries. "Only on first visit, I swear".
Could you explain what you mean by batteries? Preloaded libs? Thanks.
Browser script integration was close to being Tcl, but JavaScript (originally called LiveScript at Netscape) won.
You could actually use Perl and Python with IE 4 through Windows Script Host if you installed the versions from ActiveState (ActivePerl, ActivePython, and also ActiveTcl), which provided ActiveX scripting engines for those languages. I actually wrote some small Perl browser LAN apps using this for myself. It was a huge security hole, in that anyone who installed ActivePerl, Python, or Tcl could be rooted if they visited a web page with the appropriate malicious script tag, as these languages provide out-of-the-box support for file manipulation and other potentially dangerous actions.
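If memory serves, a page selected the ActiveState engine through the same language attribute, roughly like this (reconstructed from memory, so treat the details as approximate):

    <script language="PerlScript">
      # Full Perl, including file access -- hence the security hole
      $window->document->write("Hello from ActivePerl");
    </script>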
You could also use the AS distros to write classic, pre-.NET ASP applications. I know of at least one startup that actually did this (ActivePerl + classic ASP + IIS on Windows NT/2000), or at least seemed to based on their job postings.
Honestly that sounds terrible for the web.
JavaScript is weird, but it was specifically made, and has evolved entirely, for building non-blocking, snappy, event-driven user interfaces.
Python was not.
Also, you’d end up breaking the very standards that make the web open. If websites only work in one browser because it’s the only one that supports Python, then you’ve just lost the open web.
That’s the whole idea behind WASM: a standard compile target designed for the sandboxed environment of websites.
Python was an example of my point; I was thinking even more along the lines of any scripting language that uses a pre-processor so that code inlining works.
Much like PHP does with its opening and closing tags; in the early days of Web development, I remember building websites with Dreamweaver that way.
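That is, the inlining style where server-side code sits directly inside the markup, for example (with a hypothetical $name variable):

    <p>Hello, <?php echo htmlspecialchars($name); ?>!</p>

The idea would be the same, just running client-side with a language of your choice.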
How'd you feel about a client-side PHP subset?
This very approach seems to predetermine a centralized internet, where the dynamic behaviour that you care about belongs to a short list of websites whose functionality can be implemented by hand...
I suppose my idealism is running hard into reality. Obviously, this project is feasible and does work for many sites (not YouTube or Netflix, due to the lack of <video>, but it does work for CNN and HN and acoup.blog and...). I want to live in a world where either this is wildly successful and everybody knocks it off, or a world where this is completely impractical because everyone is doing Cool Art Things. But this world where it's practical, but unloved... I do not like that.
Huh? We had this exact thing with CGI scripts and HTML 4.01, and the dynamic behavior that we cared about was on a lot of websites.
this is awesome - thank you! I look forward to trying it!
I wonder if you could apply for some funding from https://nlnet.nl/ ?
Thanks for sharing! I would love to try it with MacOS support.
Also, in case helpful, a few typos we caught: https://triplechecker.com/s/493676/fixbrowser.org
Thanks, fixed.
Neat tool! Will try it.
Thanks!
Thanks, author.
It must take courage to disclose your product to HN.
We need to return that courage with an attempt.
First, the evaluation. Then constructive comments.
This is awesome, like a new Dillo!
Thank you so much for making this project.
I'll be adding it to my test suite.
Wow, 25 years old! https://dillo-browser.github.io/
And still under active development! Looks like they had two releases this year.
You have NetSurf, which is a CSS renderer, and better than Dillo since it is written in C and not some computer language with a grotesquely and absurdly complex syntax/size.
Thank you for reminding me to give NetSurf another try. No package in Ubuntu for some reason, but relatively easy to compile from source.
Hot tip: don’t include a field for requests on your donation form!
Good point, I've renamed it to "suggestions", unless someone else has better wording for it :)
Really cool, I think there are places where something like this could be really useful.
It could be cool to pair this with an SSR backend and package it into an Electron-like desktop app. You'd get a basic UI, but it could be very lightweight. The biggest complaint about Electron has long been memory usage. It could work great for kiosks too.
>At some point I've realized that much of the complexity and resource requirements of web browsers comes from JavaScript.
I am wondering how much of this is true.
Indeed, CSS is so massive it has long been a collection of standards. Good luck implementing all that.
the author describes how their render-once approach lets them implement CSS in a simpler way since they don't need to retain information for arbitrary dynamic changes in the stylesheets and content
they also probably don't implement most of CSS
I guess this implies that rescaling the window, or rotating your phone, will not update the view. Then you'd have to reload the page. That trade-off seems okay to me.
I believe Netscape worked like this back in the day. (ETA: as in, resizing the window would reload the page.)
Constrained features can work great in certain niches, imagine using this in a kiosk (where resizing isn't possible).
Yeah, the layout would still resize, but it could be non-ideal in some cases, as it would be based on media queries for another width.
For rotation I could compute a second layout in the background and switch to it instantly when rotated. Similarly, hover effects will be limited. Things affecting the visibility of blocks/layers should work (for menus), and small adjustments of laid-out text too, but anything more complicated won't. It currently uses a hardcoded hover effect for links.
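To illustrate with an invented breakpoint: a layout computed once for a 1200px-wide window bakes in the media-query decisions for that width, so a rule like the following would not kick in when the window is later shrunk below 800px (until a reload or a pre-computed second layout):

    @media (max-width: 800px) {
      .sidebar { display: none; }
    }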
With all the SPAs out there, I find it hard to believe we are building JS-less websites today… maybe it's a browser made for navigating the past and small blogs?
A lot of apparent SPAs also offer a simple server-side rendered version of the website for search engines and AI agents.
You can for example read medium articles just fine with w3m.
There is a plan to use the CEF engine for a specific tab or website. The user would be able to make the website work with a single click, and the browser would remember it. You would still have the advantage of a lightweight browser, just to a lesser degree.
In my experience, missing out on SPAs is not a particularly extreme hardship.
There are a few weird ones. I worked with a company that has what in my mind should be a static website, but it's not. It's headless WordPress with a Next.js frontend. It's just an informational website: what they do, contact information, services offered, and a "blog", which is just one or two articles published per month. The bloody thing is a single-page app: routing is done in JavaScript, rendering is JavaScript... I don't understand it at all. It's the type of page that needs no JavaScript at all, and yet it's built entirely as a JavaScript frontend.
For me it's banks and maps.
I mean, as long as you don’t mind not being able to access the vast majority of published content, news and social media, that is.
I think a lot can work this way. Some things are intentionally web apps, like Google's; those won't work, yes.
The opinionated approach feels restrictive to me. My best recommendation to avoid slowness, privacy violations, and other nasty things is to not include certain features as opposed to eliminating JavaScript.
For example, if we know large SPA frameworks and/or slow websites require convention A, then simply don't include support for convention A. This is a fail-by-design approach instead of a blacklist approach.
Here are some things to block to radically increase performance:
* string parsing: innerHTML, querySelectors, console.log
* allow a quota of load-time requests and then stop issuing HTTP requests until a person interacts with the page (or just break the page). If you set the quota at 10, then any page with a greater number of requests will just stop loading. That alone will eliminate 99% of spyware and dramatically shift user behavior.
* drop requests for JavaScript from origins other than the page's; that will improve both performance and privacy (a sketch of both filters follows below)
The biggest thing to help with privacy is to not support CORS. That will do more than eliminating JavaScript would.
These things are still highly restrictive but much less so than a blacklist approach.
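As a minimal sketch (invented names, hardcoded quota; not actual FixBrowser/FixProxy code), the two request filters could look something like this inside the network layer:

    #include <string.h>

    #define LOAD_QUOTA 10

    typedef struct {
        const char *origin;   /* scheme://host of the top-level page */
        int requests_made;
        int user_interacted;  /* set by the input-event handler */
    } PageState;

    /* Decide whether a subresource request may be issued. */
    static int allow_request(PageState *page, const char *req_origin, int is_script)
    {
        /* Drop scripts served from a different origin than the page. */
        if (is_script && strcmp(req_origin, page->origin) != 0)
            return 0;

        /* Enforce the load-time quota until the user interacts. */
        if (!page->user_interacted && page->requests_made >= LOAD_QUOTA)
            return 0;

        page->requests_made++;
        return 1;
    }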
> string parsing: innerHTML, querySelectors, console.log
I don't have numbers, but I doubt an in-browser JS implementation without these APIs would be useful on many websites. Even HN uses innerHTML.
Feels much harder to both implement and debug, though.
That completely misses the point. Privacy and performance advocates don’t care how hard life is for JavaScript developers. JavaScript developer convenience as the top priority results in the very slow, privacy-violating sites that things like this browser exist to ignore.
It's a whitelist.
>Planned support for systems in the near future: Linux GTK3/4
FLTK is better than GTK on Linux. Since version 1.4, FLTK supports HighDPI displays and Wayland: https://www.fltk.org/articles.php?L1947
GTK3/4 and Qt5/6 are bloatware!
This was also my first thought, but it looks like it's already designed to be toolkit-agnostic, like NetSurf. So it should be easy to port to FLTK or anything else.
$ ls -1 fixgui_*.c
fixgui_cocoa.c
fixgui_gtk.c
fixgui_haiku.c
fixgui_win32.c
FLTK is C++, then a definitive no-no?
Look at FLTK's source code. It's a minimal set of C++. Like C with classes.
Then it is C++, then a definitive no-no.
Plain and simple C99+ port?
>Plain and simple C99+ port?
C89 bindings https://github.com/MoAlyousef/cfltk
>Then it is c++, then a definitive nono.
A language with OOP is good for GUIs!
This project, with the approach contained within, the wording, the license, the programming language used, and the lack of a publicly accessible repository, reflects, in my opinion, a highly opinionated, “artisanal” approach (the reasoning behind which I’m not entirely able to comprehend) that seems to scratch the author’s itch but otherwise disregards the state of the web and the basic expectations other users have.
Regardless, good luck to the project. Would be interesting to see the end result.
To the author: there are certain social (and developer) expectations I would suggest you look into, e.g. information about you (considering you are asking for donations: who am I donating to?) and a public repository people can contribute code to. A king-of-my-own-castle approach won’t really work here.
What do you need a publicly accessible repository for?
> that seem to scratch the authors itch, but otherwise disregard ... basic expectations other users have
Isn't that kind of the whole ethos of free software? The current capitalistic view that open source is an (unpaid) job producing a product seems... unsustainable.
I understand the ethos of free software, but there’s a significant difference between a personal project with unique quirks and a public project seeking monetary and development contributions.
Deliberately going against commonly accepted practices—like not providing a public repository—can be counterproductive to the project.
For example, the ‘submit code changes via email’ approach comes across as ‘you can help, but I’ll privately decide if your help is good enough’, which might discourage potential contributors.
>you can help, but I’ll privately decide if your help is good enough
That's literally how open source works.
The current capitalistic view is that open source is a way for corporations to cooperate, and it seems to be working fairly well.
Cool. I support stuff like this even if it isn't quite usable for me or, really, practical even if it were! But I like knowing it's there, and if it were reasonably complete I would for sure keep a stable version on my machine.
Really cool project! Maybe this is a bit nitpicky, but the paragraph on the front page is a bit wonky. Some words are missing.
Wow. This is awesome in all the wrong ways. I can't decide if I hate it or love it.
It uses its own language, and it's written from scratch to such an extent that it uses the direct C API for Cocoa on macOS instead of the usual approach of just using a couple of Objective-C files. The code is not in version control, and I have not seen a single comment apart from the copyright headers.
Haha I like this take.
Using the C API for Cocoa is for compatibility reasons; calling the ObjC runtime from C is fully supported by Apple, so it's not a hack in any way. Normally the SDK used dictates the minimum version of the OS, and it's harder to support multiple versions. By using the ObjC runtime API directly, any macOS from 10.6 up can be supported easily.
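For readers unfamiliar with the technique, here is a minimal standalone sketch of calling Cocoa through the Objective-C runtime's C API (not the project's actual code):

    /* build with: cc demo.c -lobjc -framework Foundation */
    #include <objc/runtime.h>
    #include <objc/message.h>

    /* objc_msgSend must be cast to the right function type before calling */
    typedef id (*MsgSendFn)(id, SEL, ...);

    int main(void)
    {
        Class nsstring = objc_getClass("NSString");
        SEL   sel      = sel_registerName("stringWithUTF8String:");
        id    str      = ((MsgSendFn)objc_msgSend)((id)nsstring, sel, "hello");
        (void)str;  /* would be passed to further Cocoa calls the same way */
        return 0;
    }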
I've used a similar approach for Haiku, which uses C++. This is more hacky, but given their strong stance on binary compatibility it is fine, and it has worked well across multiple releases without any changes, so it's clearly a valid approach :) The reason is that I had big issues getting a C++ cross-compiler working for Haiku; supporting many platforms is not an easy task when you want to provide prebuilt binaries from a single VM.
As stated in another comment, I'm using the Monotone VCS for version control, and I have other reasons as well.
As for code comments, generally I don't need them; the code tends to be self-describing. When I do need them, it is for describing something intricate, which I generally find out later, when I need it. At that point I document it because it's clearly needed, but at that point I also know exactly what to document. Better to avoid intricate things, though.
On the other hand, I should focus more on the architectural documentation, which I've partially written but which needs to be improved and expanded.
Since this is using Cocoa and one of the targets is MacOS, have you considered using the GNUstep framework {instead of|in addition to} gtk* for the Linux side of your browser project?
I'm targeting GTK because it's the most common toolkit and it is easy to interoperate with from other languages. I will also attempt Qt/KDE support at some point in the future, but C++ is more complicated regarding binary compatibility. GNUstep seems like something that not many users have installed.
It's good to hear you are supporting Haiku. Are you around on the Haiku forum or IRC?
Yeah, I'm on the Haiku IRC.
Haiku impressed me so much that even though I'm unlikely to use it as a user (mostly because I already have a usable setup that I like), I've found it very good and polished. So I've decided that all my software will get first-class support for Haiku :)
> I've decided that all my software will get 1st class support for Haiku
That’s seriously awesome. Kudos :)
> it uses direct C API for Cocoa on macOS instead of the usual approach of just using a couple of Objective C files
This is how almost all other languages implement Cocoa support, btw. With a few weird exceptions like Apple's Objective-C++ compiler, most everyone implements FFI by chaining together LanguageA -> C -> LanguageB.
Otherwise you'd have to build an N-to-N matrix of cross-compilers, and it becomes a whole mess.
No screenshot?
There are several on the about page.
https://www.fixbrowser.org/about
Careful: if you get something working in real life on the major JavaScript-walled sites out there, Big Tech will try to make your life hell (and that includes shadow-paying hackers to destroy your software).
In such extreme cases a possible solution is to use this approach: https://www.fixbrowser.org/faq#gatekeepers
Any examples of this happening? If so, I'd really like to read about it.
Since, in the end, the benefit of the crime goes entirely to those Big Tech web engines, the burden of proof is inverted (or those hackers are beyond stu..., which is unlikely). And Big Tech is found guilty of anti-competitive practices all the time; they are repeat offenders (unless you have been living in a cave for decades).
Basically, until proven otherwise, the repeat offenders who are the sole beneficiaries of this crime are guilty.
You don't say that it's an experiment or just a learning project. You suggest it will be competitive in simplicity or speed. I find this hard to believe.
A project based on Servo would be more credible. Sure, those developers are building JavaScript engines. However, their browsers are highly modular, and you could do a build without JavaScript a lot more easily than with the major browser engines.
In addition, it uses your own programming language, and there is no source repo.
Edit: I see the purpose of it better now. It would perform very well, but not compared to other browsers that had the same modifications. However, since other browsers don't have the same modifications, it would work comparatively well for the sites it would work with.
If you wanted to make it run fast while supporting a lot of sites, and still be simple, I think using Servo would be a quicker path. They've already solved a lot of layout problems.
Good luck with it.
Using a full browser engine with JS disabled is not the same, as you won't get the architectural benefits of not having to support dynamic changes from JS at all.
It allows for much simpler one-way processing from one stage to another. In comparison, a full browser must maintain data structures for fast dynamic changes by JS, making it much more complex. I've written about it in more detail in the About section.
Embedding a full browser engine as an option is planned; I've chosen CEF as the most suitable choice. It could be used for specific tabs or websites (e.g. applications) while being integrated with the rest of the browser. However, CEF is not very portable, so it won't be available on all planned systems.
This way you would use it only for the websites/applications that need it, while saving resources when browsing the rest of the web.
Idea: WASM packaging, then running FixBrowser embedded in a web app. Then use any browser, with that webview providing safe whitelisted browsing.
> there is no source repo.
However, they do provide a zip file containing source.
At least it’s open source, although for some reason they are choosing not to use a version control system.
It's mostly for practical reasons. I use Monotone for VCS, which is not an active project (but good software regardless). Then there is the issue of having additional stuff in the repository that is meaningful just to me, but which I don't intend to release (or not in its current form).
Does this come bundled in TempleOS?
Please tell me you are doing this all by yourself, OP.