A Simple and Scalable Architecture for Your Next To-Do App

I proposed in one of my LinkedIn posts that personal projects involving CRUD operations have become somewhat obsolete and don’t teach significant skills. I suggested that if CRUD projects adopted a client-server architecture, it would spice things up and also teach a lot about decoupling and scalable application development. In this post, I propose a simple client-server architecture for your next personal project.


The Architecture at a Glance

This architecture consists of five main components:

  1. Client Application (Frontend) - Let’s say React.js.
  2. Reverse Proxy - Nginx for securing and balancing incoming HTTP requests toward front-end and back-end services.
  3. Application Server (Backend) - Flask API running on Gunicorn to handle concurrent requests.
  4. Database - PostgreSQL for data persistence, backup, and sync.
  5. Hosting - This setup is designed to be hosted on cloud virtual instances like AWS EC2 or Google Cloud Platform Compute Engine.

How it Works

1. Client Side:

  • The user interacts with the client application, which can be built using React.js or any other framework. I believe that the front end is just an implementation detail and can reside on multiple platforms. The frontend only handles the UI logic and user-generated data.
  • When the user interacts with the UI (e.g., clicking a button to add a task), the client captures these actions as events and sends them as API calls to the backend via HTTP requests.

2. Nginx as a Reverse Proxy:

  • All incoming requests from the client first pass through Nginx, which acts as a reverse proxy.
  • Nginx forwards the API requests to the Flask server (running on Gunicorn), ensuring that the backend is never directly exposed to the client. This also helps with load balancing by distributing traffic across multiple backend instances if necessary.

3. Flask REST API on Gunicorn:

  • The core of the application is the Flask REST API. This API receives the API requests from the frontend, processes the logic (e.g., creating or updating an item), and interacts with the PostgreSQL database to store or retrieve the necessary data.
  • Gunicorn, the application server, ensures that multiple requests can be handled concurrently, improving the overall performance of the application.

4. PostgreSQL Database:

  • The PostgreSQL database stores all the persistent data, such as the items created by the user.
  • Each API request that involves adding, updating, or deleting an item interacts with this database, ensuring that all changes are stored reliably. A minimal sketch of this backend appears right after this walkthrough.

5. Backup Database:

  • For added reliability, a backup database is often used. This database stores a copy of the main PostgreSQL database, ensuring that if any issues arise, the data is safe and can be restored quickly.
  • This ensures that even if the primary database goes down, your data will remain intact.

6. Cloud Hosting (AWS/GCP):

  • The entire architecture is hosted on cloud platforms like AWS EC2 or GCP Compute Engine, making it highly accessible and scalable. This setup allows you to expand your application as needed, whether that involves adding more frontend or backend instances to handle increased traffic or simply ensuring your app is accessible from any device. However, infrastructure scaling is not part of this architecture.
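
To make steps 3 and 4 concrete, here is a minimal sketch of what such a backend could look like. It is only illustrative: it assumes a module named app.py, a local PostgreSQL database called todo with a tasks table that has an auto-generated id and a title column, and placeholder credentials; none of these names are prescribed by the architecture itself.

from flask import Flask, jsonify, request
import psycopg2

app = Flask(__name__)

def get_conn():
    # Placeholder connection details; swap in your own DSN or a connection pool.
    return psycopg2.connect(dbname="todo", user="todo_user",
                            password="secret", host="localhost")

@app.route("/api/tasks", methods=["GET"])
def list_tasks():
    # Read all tasks and return them as JSON to the client application.
    with get_conn() as conn, conn.cursor() as cur:
        cur.execute("SELECT id, title FROM tasks ORDER BY id")
        rows = cur.fetchall()
    return jsonify([{"id": r[0], "title": r[1]} for r in rows])

@app.route("/api/tasks", methods=["POST"])
def create_task():
    # Persist a new task sent by the frontend and echo it back with its id.
    title = request.get_json()["title"]
    with get_conn() as conn, conn.cursor() as cur:
        cur.execute("INSERT INTO tasks (title) VALUES (%s) RETURNING id", (title,))
        new_id = cur.fetchone()[0]
    return jsonify({"id": new_id, "title": title}), 201

# In production this module would sit behind Nginx and be served by Gunicorn,
# for example: gunicorn --workers 4 --bind 127.0.0.1:8000 app:app

The sketch opens a fresh connection per request for brevity; a real deployment would reuse a connection pool and close connections explicitly.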

Why This Architecture Works Well

1. Separation of Concerns:

  • In this architecture, the frontend and backend are completely decoupled. The frontend handles the user interface (UI) and communicates with the backend through secure REST APIs.
  • This decoupling allows you to work on either the UI or backend independently without breaking the other. For example, you can update your React frontend while keeping the Flask API unchanged, or vice versa.

2. Security with Nginx:

  • One of the key security advantages is that the client never interacts directly with the backend. Instead, user events (like adding new data) are captured by the frontend and sent as API requests.
  • Nginx, acting as a reverse proxy, sits between the client and the Flask server, forwarding these API calls securely. This setup reduces the server's exposure to the public internet, ensuring a more secure environment for your backend services.

3. Gunicorn for Concurrent Request Handling:

  • Gunicorn is the WSGI server that powers the Flask app, ensuring that multiple API requests can be processed concurrently. This means your backend can handle more traffic, making the app scalable as user demand grows.
  • Gunicorn efficiently manages requests, improving the app's ability to scale without slowing down under load.

4. Data Reliability with a Backup DB:

  • While not essential for every app, adding a Backup Database ensures that you don’t lose important data if something goes wrong with your main database. This PostgreSQL backup can sync automatically with the primary database, providing peace of mind in the event of failure.

5. Cloud Hosting for Flexibility:

  • If you decide to turn your to-do app into a mobile or desktop app later, you can simply connect to the same backend without making major changes.
  • Additionally, cloud services often come with built-in features like autoscaling, load balancing, and security options to further enhance your app’s performance and reliability at a later stage.

Whether you're building your first to-do app or a more complex project, this client-server architecture offers a strong foundation. Its modular design allows for easy development and updates, its security features keep your data and server safe, and its cloud-hosted flexibility enables future expansion. With a clear separation of concerns, a scalable backend, and robust data handling, you can confidently build apps that are both simple and powerful.

The Winter Arc

The Winter Arc: A Season of Self-Development

The Winter Arc is a focused, three-month period of intentional self-development and personal growth. This concept draws inspiration from nature’s winter season—a time when the outside world appears dormant, but underneath, the seeds of future growth are quietly germinating. During the Winter Arc, you dedicate yourself to improving various areas of life, such as relationships, personality, finances, spirituality, or career. It’s a time to retreat, reflect, and hustle for your own well-being, setting the stage for a more empowered future.

In this arc, the emphasis is on internal progress and discipline. It’s about focusing solely on yourself, temporarily cutting out external distractions, and pushing your boundaries to grow in meaningful ways. While the world continues at its usual pace, you are consciously stepping into a period of self-reliance, introspection, and concentrated effort.

The Philosophy Behind the Winter Arc

The Winter Arc is rooted in the idea that growth requires periods of withdrawal and deep focus. Just as trees shed their leaves to conserve energy during winter, we too need times when we pull away from the noise of the outside world to focus inward. This is not about isolating yourself entirely, but about prioritizing your own development over external influences.

In our fast-paced, hyper-connected world, it’s easy to get distracted by the constant flow of information and social expectations. The Winter Arc is an opportunity to cut through that noise and commit to a period of deliberate and mindful self-improvement. By focusing exclusively on your personal journey for a set period, you can create lasting changes that carry into the new year and beyond.

When Does the Winter Arc Take Place?

The Winter Arc typically spans the last three months of the year—October, November, and December. However, for those who wish to start earlier, the arc can begin in September. The end date should always remain within the same calendar year. Importantly, the Winter Arc must be contained to three months—no extensions allowed. This limitation reinforces the idea that growth doesn’t need indefinite time, but focused effort within a finite period.

During this time, you’ll have to make choices that reflect your commitment to your goals. This could mean declining social engagements, reducing screen time, or saying no to distractions that pull you away from your purpose. The Arc's time-bound nature gives it a sense of urgency, making each day an opportunity to move closer to the person you want to become.

Rules of the Winter Arc

  1. Don’t Talk About the Winter Arc
    The first rule of the Winter Arc is that you keep it private. This is your personal journey, meant for introspection and action, not for discussion or external validation. By keeping your focus inward, you preserve the sanctity of your arc, free from outside opinions and influences.

  2. No Goal Disclosure
    Avoid the temptation to disclose your specific goals and objectives to others or on social media. Sharing your ambitions can sometimes lead to premature validation, which may reduce your motivation to follow through. Your goals are personal, and the satisfaction of achieving them should come from within.

  3. Avoid Perfectionism
    While the Winter Arc is about growth, it’s essential to avoid perfectionism. Perfectionism can lead to procrastination, fear of failure, and unnecessary pressure. Focus on making consistent progress, even if it’s imperfect. Progress, no matter how small, is better than stagnation.

  4. Limit Dopamine Rushes
    In a world full of instant gratification, it’s crucial to avoid activities that provide a quick dopamine hit but don’t contribute to your long-term goals. Reduce time spent on social media, video games, or binge-watching shows. Instead, channel that energy into productive habits like reading, exercising, or skill-building.

  5. Daily Progress Logging
    One of the most important rules of the Winter Arc is to track your progress every day. Keeping a daily log not only holds you accountable but also serves as a visual reminder of how far you’ve come. Whether you use a journal, an app, or a simple checklist, this habit will help maintain your momentum.

  6. No Extensions
    The Winter Arc is strictly limited to three months. Extensions are not allowed, ensuring that the arc maintains a sense of urgency. The limitation forces you to make the most of the time you have and pushes you to work with focus and intention. Once the arc is over, you can reflect on your progress and set new goals for the next chapter of your life.

The Power of Focused Effort

The Winter Arc is an exercise in concentrated effort. By limiting your timeline and removing distractions, you give yourself the gift of focus—something that’s often hard to come by in today’s world. Studies have shown that the ability to focus deeply for extended periods can lead to higher productivity, better problem-solving skills, and more meaningful progress in life.

This focused effort can have lasting effects. While the Winter Arc only lasts three months, the habits, mindset, and personal growth you cultivate during this time can extend well beyond the arc’s end. In fact, these three months could serve as a powerful launching pad for the year ahead, setting the tone for your future success.

Goal-Setting Template

When setting goals for your Winter Arc, ensure they follow the SMART criteria. This will make your goals clear, actionable, and achievable within the arc’s time frame.

Goal Name:

  1. Specific – What exactly do you want to accomplish? A clear, detailed goal gives you direction.
  2. Measurable – How will you measure your progress? Quantifiable milestones help you track your advancement.
  3. Achievable – Is this goal realistic within the three-month period? Stretch yourself, but make sure it’s something you can realistically achieve.
  4. Relevant – Does this goal align with your broader life goals? Ensure it has meaning and significance in the larger context of your personal development.
  5. Time-Bound – What is your deadline? In this case, the end of the Winter Arc is your natural deadline, but consider breaking down larger goals into smaller time-specific milestones along the way.

How to Make the Most of Your Winter Arc

To maximize your Winter Arc, consider these strategies:

  1. Create a Routine – Establish a daily or weekly routine that supports your goals. Consistency is key to making progress.
  2. Prioritize Self-Care – While the Winter Arc is about self-improvement, it’s also important to rest and recharge. Make sure to get enough sleep, eat well, and practice mindfulness to maintain balance.

  3. Reflect Regularly – Set aside time each week to reflect on your progress. What’s working? What needs adjustment? Reflection helps you stay on track and make necessary changes as you go.

  4. Celebrate Small Wins – Don’t wait until the end of the arc to celebrate your success. Acknowledge and reward yourself for the small victories along the way. This will keep you motivated and remind you that progress is being made.

  5. Stay Flexible – While discipline is crucial, it’s also important to be adaptable. If you find that a goal is no longer serving you or is unattainable within the arc’s timeframe, don’t be afraid to pivot. Adjustments are a natural part of the process.

Conclusion: Embrace the Winter Arc

The Winter Arc is an opportunity to step into a season of focused growth, free from distractions and external pressures. By dedicating these three months to personal development, you can build a foundation for long-lasting change. The rules of the arc are simple but powerful—keep your focus inward, avoid perfectionism, and work consistently toward your goals.

Embrace the quiet intensity of the Winter Arc. By the time it’s over, you’ll have not only improved yourself but also cultivated the discipline, resilience, and habits that will propel you into the next chapter of your life.

PlayStation vs Xbox Dilemma

I never thought I would face a dilemma in real life like Sheldon did on TV! But this dilemma is as real as it gets and is also unavoidable. It's about whether I should go for an Xbox or a PlayStation. And I don't even have a girlfriend like Amy who would offer to buy me both :(.

Sheldon confused between buying an Xbox One and PS4

Why do I want to buy a console?

I've been interested in playing video games since childhood. However, due to study pressures starting from 9th grade, I began playing less. During my entire undergraduate and later master’s studies, I didn't play any games except for the occasional Clash of Clans. It was only after I got a job that I started playing again. This year, I began playing games on Steam, starting with a game called "Dying Light"—an absolutely fantastic game, by the way! Later, I found myself buying many other titles on Steam, such as Witcher Wild Hunt, RDR2, Watch Dogs, etc. This year alone, I’ve bought around 10 games on Steam, which cost me more than 5K INR combined.

After playing games on PC, I also explored new games on my phone. To my surprise, mobile games have gotten a lot better. I only knew about PUBG mobile, which I play casually with friends, but games like Genshin Impact and Diablo Immortal really impressed me. Because of these games, I bought a dedicated EvoFox Deck controller for my phone, and I’m loving it!

I enjoy completing campaigns in games and collecting as much loot as possible. To understand more about what kind of gamer I am, I also took this gamer profile test https://apps.quanticfoundry.com/surveys/answer/gamerprofile/. It was a great starting point to know what kind of games I’m most inclined to play. To my surprise, I am a "Bounty Hunter" gamer, which means I like a lot of action, blowing things up, progression, etc. It absolutely makes sense. I love playing games for fun rather than being too competitive.

I heard about Xbox Game Pass from one of my colleagues. It’s a special subscription that allows playing as many games on the Xbox console as one likes. The subscription is also very affordable at 500 INR per month. So practically, I can play and try out as many games as I want for the cost of one old game on Steam. When I think about it, it’s absolutely what I need! I want to play and enjoy as many games as I like, and a Game Pass seems perfect for that. I’m not just interested in popular titles but also in famous RPG legacy games like Watch Dogs, GTA5, and RDR. Therefore, the idea of having access to a wide catalog of games is too appealing to pass up.

Consoles come with high-performance specs to enhance the gaming experience. The Xbox Series X promises to render 4K at 120 fps, which is mind-blowing to me. Currently, I play games on my Lenovo Ideapad Gaming 3, which can render 4K at 20 to 30 fps for older games like Witcher Wild Hunt. However, I also have to switch to lower settings because playing at lower fps is no fun at all. What’s more interesting about consoles is that they provide a similar gaming experience for all the games available in the store, ensuring that I won’t be compromising on game quality. This guarantee extends at least into the foreseeable future and is certainly more economical than building and maintaining a custom PC.

Another major reason for choosing a console is the controller. In the past, I hated controllers and relied solely on a keyboard and mouse for gaming. However, I’m now finding controllers to be a lot of fun. Most games have similar mapping and flawless support for a joystick, making the gaming experience very enjoyable. Additionally, my experience with the mobile deck has also fueled my liking for playing games with a controller.

So, after all this, I’ve decided to buy a console for myself. But the dilemma arises! Should I go for the PS5 Slim or the Xbox Series X? To be honest, this decision is now haunting me. This article is all about me contemplating objectively which among the PS5 or Xbox Series X is better for me.

How am I deciding which one to buy?

Where subjectivity delays, objectivity thrives. I could jump on the internet fandom for either console and become a thoughtless member of a PS5 tribe or an Xbox SX enthusiast, but I’d rather be objective about it. Both consoles have some neck-and-neck features that cannot be denied, so subjectively it's impossible to decide which one to choose. Therefore, I feel math needs to step in and objectively clarify which console would be better for me.

I will use a weighted decision matrix for both consoles. Each parameter will have a score for each console, and the net score will be influenced by these parameters. At the end, the weighted scores for all the parameters will be added to find the net score, and the console with the highest score will be the winner. One caveat to this technique is that the weights will be assigned subjectively, i.e., based on what I prioritize.

Parameters and Respective Weights

Parameters are essentially the factors or properties of each console. The weights represent the influence each parameter has on the overall score. For example, "Build" is a parameter that holds very little influence in my decision since I don’t care too much about build quality like design or ergonomics, except that it should be sturdy.

Here are the parameters that I’ve collected to judge both consoles:

  1. Build
  2. Console Price
  3. Performance
  4. Storage Extension
  5. Game Library
  6. Backwards Compatibility
  7. Subscription
  8. Controller
  9. Exclusive Titles
  10. Game Availability
  11. User Reviews and Ratings
  12. Market

Based on my requirements, I have assigned weights to each of these parameters as follows:

Parameter Weight
Performance 12/12
Game Library 11/12
Backwards Compatibility 10/12
Storage Extension 9/12
Exclusive Titles 8/12
Subscription 7/12
Controller 6/12
Market 5/12
User Reviews and Ratings 4/12
Build 3/12
Price 2/12
Game Availability 1/12

Analysis

I’ll evaluate both consoles on each parameter. The number of factors under a parameter determines the maximum score available for that parameter. For each console, the parameter score is multiplied by the respective weight to get the weighted parameter score.
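
Before going parameter by parameter, here is a small Python sketch of this weighted-score arithmetic. It is only illustrative: the three parameters and raw scores shown are taken from the tables that follow, and the full evaluation simply extends the same loop to all twelve parameters.

# Weight of each parameter, expressed as priority/12 (from the weights table above).
weights = {
    "Performance": 12 / 12,
    "Game Library": 11 / 12,
    "Backwards Compatibility": 10 / 12,
}

# Raw (unweighted) scores per console for those parameters.
raw_scores = {
    "Performance": {"PS5 Slim": 4, "Xbox Series X": 6},
    "Game Library": {"PS5 Slim": 1, "Xbox Series X": 0},
    "Backwards Compatibility": {"PS5 Slim": 0, "Xbox Series X": 1},
}

# Net score = sum of (raw score x weight) over all parameters.
totals = {"PS5 Slim": 0.0, "Xbox Series X": 0.0}
for parameter, per_console in raw_scores.items():
    for console, raw in per_console.items():
        totals[console] += raw * weights[parameter]

for console, total in totals.items():
    print(f"{console}: {total:.2f}")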

Performance

Specification | PS5 Slim | Score | Xbox Series X | Score
CPU | 8-Core 3.5GHz | 1 | 8-Core 3.8GHz | 1
GPU | 10.3 teraflops | 0 | 12 teraflops | 1
RAM | 16GB GDDR6 | 1 | 16GB GDDR6 | 1
Storage | 825GB SSD | 0 | 1TB NVMe SSD | 1
Frame Rate | 120 fps | 1 | 120 fps | 1
Resolution | 8K | 1 | 8K | 1
Total | | 4 | | 6
Weighted Score | | 4 x (12/12) = 4 | | 6 x (12/12) = 6

Game Library

This parameter compares the size of the game library that is optimized to work well with each console.

Console | No. of Games | Score | Weighted Score
PS5 | 2650 | 1 | 1 x (11/12) = 0.92
XSX | 438 | 0 | 0 x (11/12) = 0.00

Backward Compatibility

Xbox is a clear winner in this parameter, as almost all the games from Xbox One and a good collection from Xbox 360 are compatible. In contrast, the PS5 supports most PS4 games but very few PS3 or PS2 games.

Console | Score | Weighted Score
PS5 | 0 | 0 x (10/12) = 0.00
XSX | 1 | 1 x (10/12) = 0.83

Storage Extension

Given my newfound appetite for gaming, I will need extra storage.

Property | PS5 Slim | Score | Xbox Series X | Score
Cost of 1TB SSD (INR) | 7771 | 1 | 17000 | 0
Ease of Installation | DIY | 0 | Plug-&-Play | 1
Total | | 1 | | 1
Weighted Score | | 1 x (9/12) = 0.75 | | 1 x (9/12) = 0.75

Exclusive Titles

PS5 is the winner for this parameter, as it has more popular exclusive titles like Spider-Man, God of War, etc. Also, GTA 6 is set to be available first on PS5 in 2025 and later on Xbox.

Console | Score | Weighted Score
PS5 | 1 | 1 x (8/12) = 0.67
XSX | 0 | 0 x (8/12) = 0.00

Subscription

Both Xbox and PS provide gaming subscription services that include free game catalogs and features like cloud save.

Property | Xbox Game Pass | Score | PlayStation Plus | Score
Price | INR 349/month | 1 | INR 749/month | 0
Cloud Storage | No | 0 | Yes | 1
Exclusive Content | No | 0 | Yes | 1
Catalogue Size | 25 | 0 | >100 | 1
Total | | 1 | | 3
Weighted Score | | 1 x (7/12) = 0.58 | | 3 x (7/12) = 1.75

Controller

Property | PS5 Slim | Score | Xbox Series X | Score
Third-Party Controller Support | No | 0 | No | 0
Weighted Score | | 0 | | 0

Market

Market share provides insight into customer preference.

Sales | Xbox Series X | Score | PS5 | Score
Total Sales | 21 million | 0 | 59.3 million | 1
In 2023 | 7.6 million | 0 | 21.8 million | 1
Total | | 0 | | 2
Weighted Score | | 0 x (5/12) = 0.00 | | 2 x (5/12) = 0.83

User Ratings & Reviews

The user rating for the Xbox Series X on Amazon is 4.8, while for the PS5 it is 4.9. Hence, the PS5 is the clear winner.

Console | Score | Weighted Score
PS5 | 1 | 1 x (4/12) = 0.33
XSX | 0 | 0 x (4/12) = 0.00

Build

Build quality doesn't matter to me unless the device is fragile. I don’t think either of the consoles is fragile.

Console | Score | Weighted Score
PS5 | 1 | 1 x (3/12) = 0.25
XSX | 1 | 1 x (3/12) = 0.25

Price

Both consoles are similarly priced and within my budget.

Console | Score | Weighted Score
PS5 | 1 | 1 x (2/12) = 0.17
XSX | 1 | 1 x (2/12) = 0.17

Game Availability

Availability of games on multiple platforms doesn’t matter to me much.

Console | Score | Weighted Score
PS5 | 0 | 0 x (1/12) = 0.00
XSX | 1 | 1 x (1/12) = 0.08

Final Score

With our objective evaluation, the final scores are:

Xbox Series X = 8.67

PS5 Slim = 9.67

So, the PS5 is the winner 🎉.

Conclusion

We observe that while the Xbox Series X has an edge in performance and backward compatibility, the PS5 excels in game library, subscription, and popular exclusive titles. Based on this analysis, it’s clear that I am going for the PS5. The only challenges I see are the DIY installation needed to extend its storage and having to buy an extra Sony controller, since it does not support third-party controllers.

To GC or not to GC

Disclaimer: this article is meant for beginners and tech enthusiasts and includes purely my opinions.

Programs require memory management

Computer memory is a finite resource. By memory, I am referring to the primary memory (RAM) used by the CPU to store, retrieve, and process program data during runtime. Typically, an average PC has 8 to 16 gigabytes of RAM, which is shared by all the apps running on it. Because memory is finite, programmers face a significant challenge: managing it effectively without compromising the reliability and usability of the OS.

We can understand the problem of memory management with the help of an example: Let's suppose you write a program (in any programming language of your choice) that runs indefinitely to collect a bunch of numbers from the user, store them, and print them in your terminal. It would look like this to the user:

$ Enter number: 2
[OUT]: [2]
$ Enter number: 4
[OUT]: [2,4]
$ Enter number: 8
[OUT]: [2,4,8]
$ Enter number: 10
[OUT]: [2,4,8,10]
$ Enter number: 6
[OUT]: [2,4,8,10,6]

To realize this program I'll write some code in my hypothetical programming language like this:

number_store = NULL
while (true) {
    print("Enter number: ");
    x = read_number();
    if (number_store == NULL) {
        number_store = new Integer[1];
        number_store[0] = x;
    } else {
        number_store_new = new Integer[number_store.current_size + 1];
        copy(number_store, number_store_new);
        number_store_new[number_store.current_size] = x;
        number_store = number_store_new;
    }
    print(number_store);
}

The above code is quite straightforward. Here, we allow the user to enter a number and then save that number in an array after increasing its size by one. When we create an array with the new keyword, in the background, the language reserves some memory space. Initially, memory space equivalent to size 1 is reserved, but as the user enters more numbers, more memory is reserved. However, this leads to a fundamental problem! When we create a new array and assign the old variable to this new array, what happens to the old array and the memory we allocated to it during runtime? Where does that memory go?

The answer is that it doesn't go anywhere! It still resides in the memory of our PC. As long as our program executes, the memory we reserved will be locked and unusable by any other program. This phenomenon is also known as a Memory Leak in computer science lingo.

We can solve this problem by introducing functionality to free up the memory that is no longer required, as follows:

number_store = NULL
while (true) {
    print("Enter number: ");
    x = read_number();
    if (number_store == NULL) {
        number_store = new Integer[1];
        number_store[0] = x;
    } else {
        number_store_new = new Integer[number_store.current_size + 1];
        copy(number_store, number_store_new);
        number_store_new[number_store.current_size] = x;
        delete number_store; // deallocate old memory
        number_store = number_store_new; // assign new memory to old variable
    }
    print(number_store);
}

However ingenious this solution may look, it's not new. Older high-level languages like C and C++ already provide facilities such as malloc, free, new, and delete that perform this task. But manual memory management like this puts extra strain on developers, who must write memory management logic in addition to complex business code. Hence, there is a need for a tool that runs behind the scenes and manages memory with minimal to no programmer intervention.

Garbage collectors solve memory management

Garbage collection was first introduced in the LISP programming language to automate memory management. Garbage collectors are programs that run implicitly during the runtime of a program and free up memory that is no longer required, allowing other programs to utilize those blocks. For example, in our program above, a garbage collector would typically free the memory occupied by the old array once we have copied its values into the new one. Garbage collectors can use many different algorithms to detect redundant memory held by a program; we won't go into those algorithms in this article.
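
Most mainstream runtimes expose such a collector. As a rough illustration (not tied to the hypothetical language above), here is a small Python sketch that builds an unreachable reference cycle and then asks Python's collector to reclaim it; the exact object count reported depends on the interpreter version.

import gc

class Node:
    # A node that can point to another node, letting us build a reference cycle.
    def __init__(self):
        self.other = None

# Two objects referencing each other form a cycle that plain reference
# counting alone cannot reclaim.
a, b = Node(), Node()
a.other, b.other = b, a

# Drop our own references; the cycle is now unreachable garbage.
del a, b

# Force a collection pass; gc.collect() returns the number of unreachable
# objects it found.
print("unreachable objects collected:", gc.collect())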

Garbage collectors become a problem themselves

Garbage collectors solved crucial memory management issues by preventing leaks, but they add runtime overhead, reducing overall performance. In Java, for example, each JVM instance runs its own garbage collector, and running it introduces pause times while memory is being cleaned, extra heap overhead (which again resides in primary memory), increased latency, and higher CPU usage. In addition, the efficiency of a GC depends on its underlying algorithms, which can add complexity in certain use cases.

Can we do without garbage collectors?

The use of GC in programming languages is a boon for developers, but it does come at a cost. So a natural question arises: can we do without a GC? Or can we find a solution with a better tradeoff between manually managing memory and automatically preventing memory leaks? The answer may lie in a relatively new programming language: Rust.

Rust is a fairly new programming language with C-like syntax that requires neither manual memory management calls nor a garbage collector. Rust enforces strict rules in its syntax to prevent developers from ever holding on to redundant memory, implementing RAII (Resource Acquisition Is Initialization) through the concept of ownership. In a nutshell, ownership means that every piece of memory currently in use has an owner. If the owner goes out of scope, is removed, or is reassigned to another piece of memory, that memory is immediately deallocated. An owner may own more than one piece of memory, which is completely fine, but when the owner goes away, all of its memory is freed as well.

Implementing solutions like RAII again comes at a cost. Since the onus is now on the language to prevent developers from creating duplicate owners of a piece of memory, it restricts developers from free-style, multi-paradigm coding.

Conclusion

With this, we have reached the end of our article: "To GC or not To GC." Rust has shown us an example of how a language can operate without a GC in its runtime, making itself fast and efficient. The trade-off of syntax strictness is also fair, as developers don't have to worry about manual memory management. However, GCs exist in almost all popular programming languages, such as Go, Python, and Java, and they aren't going anywhere. I believe that in the future a solution will emerge that tackles memory management with a mix of both concepts: garbage collection and RAII.

Simple path to get good at DSA

Preparing to tackle Data Structures and Algorithms (DSA) effectively involves a dual approach, which I categorize as "Mathematics" and "Intuition". Much like mathematics, where one begins with learning theorems and formulas before solving problems, mastering DSA requires a solid foundation in popular data structures and algorithms followed by honing problem-solving intuition.

Part 1: Mathematics

Start by comprehensively understanding and committing to memory all essential algorithms in DSA. Progress through the following levels:

  1. Math + Array + String
  2. Hashing (Hashset, HashMap) + Stack + Queue
  3. Graph
  4. Linked List + Binary Tree + BST + AVL + RB
  5. Dynamic Programming

Acquiring a deep understanding of each topic is paramount. Without ingraining these algorithms into memory, solving problems becomes significantly more challenging. It's akin to knowing what needs to be done but struggling to articulate it. Therefore, prioritize memorizing algorithms for each topic. Here's a breakdown of algorithms for each category:

Math: LCM, GCD, Is Prime, Sieve of Eratosthenes, Binary Exponentiation, Modular Exponentiation, Karatsuba, Fermat's little theorem, Bit arithmetic, Permutation, Combination.

Array: Linear Search, Binary Search, Bubble Sort, Selection Sort, Insertion Sort, Merge Sort, Quick Sort, Counting Sort, Radix Sort, Heap Sort, Shell Sort, Bucket Sort, Cycle Sort, Two Pointer, Three Pointer.

String: All string manipulation methods in the programming language of your choice, Rabin-Karp, KMP, Boyer-Moore, Trie, Levenshtein Distance, Longest Common Subsequence.

Hashing: Hashset, Hashmap, Using arrays as maps.

Stacks & Queues: Know-how.

Graph: BFS, DFS, Dijkstra's Algorithm, Bellman-Ford Algorithm, Floyd-Warshall Algorithm, Prim's Algorithm, Kruskal's Algorithm, Topological Sorting, Tarjan's Algorithm, Kosaraju's Algorithm, Bipartite Checking, Articulation Points, Eulerian Path and Circuit, Hamiltonian Path and Circuit, Minimum Cut.

Linked List, Binary Tree, Binary Search Tree, AVL Tree, RB Tree: Know-how.

Dynamic Programming (classical problems): Fibonacci Series, Longest Common Subsequence, Longest Increasing Subsequence, Knapsack Problem, Matrix Chain Multiplication, Edit Distance, Coin Change Problem, Subset Sum Problem, Partition Equal Subset Sum, 0/1 Knapsack Problem, Maximum Subarray Sum, Rod Cutting Problem, Coin Row Problem.

The list is not complete, but you can find exhaustive lists of algorithms for each topic anywhere on the internet.
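
To make "memorizing a template" concrete, here is one such template from the Array list, binary search, written as a minimal Python sketch; the language you drill these in is entirely your choice.

def binary_search(arr, target):
    # Return the index of target in the sorted list arr, or -1 if absent.
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

print(binary_search([2, 4, 8, 10, 15], 10))  # prints 3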

Part 2: Intuition

Upon completing each level, immerse yourself in problem-solving. These problems demand more than just applying memorized algorithms; they require a deep understanding of the problem and the ability to adapt standard algorithms to solve specific scenarios. Consider the memorized algorithms as templates that guide your problem-solving approach.

Only tackle problems related to each level after memorizing all relevant algorithms. To access such problems, utilize platforms like LeetCode and filter by tags corresponding to each topic.

One crucial tip: Avoid fixating on the number of problems solved. While seeing the count rise can be momentarily gratifying, it can also impede progress. Instead, focus on intrinsic motivation and improvement in problem-solving intuition. Think of it as long-term fitness training; it's not about how many drills you perform but how effectively you perform in the actual game.

By adopting this comprehensive approach, you'll not only master DSA but also develop robust problem-solving skills essential for real-world applications. Remember, it's a journey of continual learning and refinement rather than a race to accumulate problem-solving streaks.

That is all one has to do to get good at DSA. Nothing more and nothing less. Getting good at DSA is not essential for doing your job, i.e., software engineering, but it is a benchmark by which you will be hired and by which you will hire others. Early in our careers we find ourselves solving DSA problems to get a job; later on, we ask DSA questions to give one.

MVP: An improvement over Model View Controller

Model View Presenter (MVP) is a pivotal software architecture introduced by Taligent Inc. in 1996. It emerged as an enhancement over the then-prevalent Smalltalk programming model, which was built upon the foundations laid by the Model View Controller (MVC) architecture. In this article, we delve into the essence of MVP, contextualizing its significance by examining the limitations of the still-popular MVC architecture, and we also develop a simple MVP app to demonstrate how MVP solves some intrinsic limitations of MVC.

Some examples and concepts are directly derived from the 1996 paper[1] published by Mike Potel.

Fig.1: Model View Presenter

Model View Controller

The Model View Controller (MVC) framework stands as a widely embraced approach for crafting diverse applications, offering a clear division of the business logic into three key components: Model, View, and Controller.

The Model bears the responsibility of managing and manipulating data alongside relevant operations, ensuring robust data management within the application.

Meanwhile, the View component serves the crucial role of presenting this data to users, facilitating an intuitive and engaging user interface.

The Controller acts as the intermediary, adeptly handling requests originating from the View and orchestrating corresponding updates within the Model. Communication follows a unidirectional flow from the View to the Controller and from the Controller to the Model. However, the communication between the View and Model is bidirectional, allowing for seamless synchronization. Consequently, any alterations made in the Model reflect in the View, and vice versa.

Simple Example of a MVC app

Consider a practical example of an MVC application: a Phone Book app. In this scenario, the model serves as the structural representation of a phone book record, denoted as PhoneRecord, along with collections of such records. Beyond mere representation, the model encompasses standard methods such as getData and setData to facilitate interaction with this data.

The view component, on the other hand, assumes the responsibility of presenting this data to the user, typically in formats such as tabular records. It acts as the interface through which users interact with the application's functionalities.

In the midst of the model and view lies the controller, serving as the intermediary that maps user-generated events, such as mouse clicks or keyboard inputs like hitting the delete button, to corresponding handlers. These handlers, intrinsic to the controller, orchestrate additional operations such as creating new views to solicit user data or extracting data from existing views, culminating in the application of changes to the model.

Limitations of MVC

While MVC remains a cornerstone framework for developing applications of diverse natures, it grapples with a significant limitation: tight coupling between the controller and view. The controller often becomes excessively reliant on the view to execute its logic, leading to the inclusion of non-business-related operations within the controller's handler body.

Consider an instance within our application where a "Create new record" button triggers an event. The corresponding controller captures this event, initiates a user interface (UI) to collect data, and applies operations to the model upon user confirmation. Here, the controller's dependence on the view becomes evident as it extracts data, such as first name value, from UI components like firstName.getText(). Following data manipulation, the controller again assumes responsibility for updating the view. This intertwining of view-related logic within the controller epitomizes tight coupling.

To address this issue, it becomes imperative to disentangle view-specific logic from the controller. Tight coupling undermines the modularity and flexibility of the codebase, ultimately impeding scalability and maintainability. By mitigating this dependency, developers can foster cleaner code architecture and enhance the extensibility of the application.

Model View Presenter

To address the limitations of the MVC pattern, a separation of concerns between the view, model, and controller becomes imperative. Rather than burdening the controller with tasks beyond request handling, such as event capture and view updates, a more refined approach is warranted. Enter MVP (Model-View-Presenter) architecture, which introduces the notion of segregating the presenter component from the view, thereby adding layers of abstraction to interact with the model.

The concept of MVP architecture was pioneered by Taligent Inc., a subsidiary of IBM, in 1996 [1]. Its fundamental principle revolves around encapsulating data via selections and commands for the model while abstracting the view logic through interactors. In this paradigm, the controller in MVC retains the role of mapping view-generated data to appropriate commands for consumption. This intermediary layer, known as the presenter, assumes a pivotal role in the MVP architecture.

The presenter in MVP architecture serves atop the view, tasked solely with receiving data from the view and applying business logic by invoking suitable commands or handlers. What distinguishes this approach? Consider the scenario where the view API undergoes modifications in the future. Traditionally, such changes would necessitate adjustments within the controller. However, by decoupling the view logic from the controller, the presenter remains insulated from underlying alterations, leaving the responsibility of adapting to changes to the view itself. Consequently, modifications are confined to the view layer, sparing the presenter from unnecessary revisions.

The Presenter of the View

A prominent evolution ushered in by MVP architecture is the abstraction of view logic from the controller, achieved through an additional layer of abstraction over the view. Illustrated in our previous example of the phone book application, the view captures events like clicking on the delete button. Upon event capture, the view gathers pertinent data, such as the ID of the record in the currently selected row, formatting it into a JSON object along with other metadata before passing it to the presenter.

Within the presenter layer, the data is received from the view alongside the interactor object—in our case, the delete button press event. The presenter then leverages this data and interaction event to invoke suitable commands. Upon executing the commands and effecting changes to the model, the model informs the view of the updates, thereby triggering a refresh in the view.

In the MVP architecture, each view possesses its dedicated presenter, offering a tailored approach to handling user interactions. Furthermore, presenters can be reused across multiple views. By abstracting away the underlying intricacies of the view API and focusing solely on events and data manipulation, MVP architecture ensures that the business logic remains insulated from any alterations made to the view.

Model

When developing an MVP application, a critical initial consideration revolves around data. MVP adopts a bottom-up approach, beginning with inquiries regarding data modeling. It systematically addresses questions related to data structures before proceeding to craft suitable presenter logic for data manipulation.

The author of [1] proposes the following questions to ask before creating the MVP design of an app, starting with the model.

1. What is my data?

2. How do I select my data?

3. How do I operate on my data?

Once a developer has answered these questions and enough model-related specifications are in place, they move on to questions related to the view.

4. How do I display my data?

5. What are my events?

6. How do I stitch everything together?

Let's build the simple phone book app by asking ourselves these questions.

Part 1: PhoneBook application via MVP

To simplify the process of developing an application using the MVP approach, the creators of MVP have proposed a straightforward framework, breaking down the code into the following segments: IModel, ISelection, ICommand, IPresenter, IInteractor, and IView.

In Part 1 of the build, we'll focus on the model and address the first three questions.

1. What is my data?

The response to this inquiry can be quite straightforward. For instance, the data may consist of a list of CSV strings stored in a file, or it could be a database containing phone records with attributes such as first name, last name, and phone number. To abstract the underlying raw data implementation, we will start by creating a runtime representation of this data. Below is an example of a Model for a phone book application.

IModel:

public interface IModel {
    public List<PhoneRecord> getAllRecords();

    public PhoneRecord getRecordWithId(String id);

    public boolean createRecord(PhoneRecord record);

    public boolean updateRecordWithId(String id, PhoneRecord update);

    public boolean deleteRecordWithId(String id);
}

PhoneRecord:

public class PhoneRecord {
    private String id;
    private String firstName;
    private String lastName;
    private String phoneNumber;

    public PhoneRecord(String row) {
        String[] extract = row.split(",");
        this.id = extract[0];
        this.firstName = extract[1];
        this.lastName = extract[2];
        this.phoneNumber = extract[3];
    }

    public PhoneRecord(String id, String firstName, String lastName, String phoneNumber) {
        super();
        this.id = id;
        this.firstName = firstName;
        this.lastName = lastName;
        this.phoneNumber = phoneNumber;
    }

    // getters and setters go here ...
}

Additionally, the model is where we can implement data persistence. However, for the sake of simplicity, we have omitted that here.

2. How do I select my data?

Now, the question arises: how do we define the data specification? This question guides us in determining how operations are applied to the dataset. For instance, users might want to delete multiple phone records at once. To accommodate this functionality, we abstract the logic of grouping multiple data records from the model. In our example, we support both single and multiple selections of data from our phone book records.

ISelection:

public interface ISelection {
    public List<PhoneRecord> getSelections(IModel model);
}

MultiPhoneRecordSelection:

public class MultiPhoneRecordSelection implements ISelection {
    private final List<String> idList;

    public MultiPhoneRecordSelection(List<String> ids) {
        this.idList = ids;
    }

    public List<String> getIdList() {
        return idList;
    }

    @Override
    public List<PhoneRecord> getSelections(IModel model) {
        List<PhoneRecord> records = new ArrayList<>();
        for (String id : idList) {
            records.add(model.getRecordWithId(id));
        }
        return records;
    }
}

SinglePhoneRecordSelection:

public class SinglePhoneRecordSelection implements ISelection {
    private String id;

    public SinglePhoneRecordSelection(String id) {
        this.id = id;
    }

    public String getId() {
        return id;
    }

    @Override
    public List<PhoneRecord> getSelections(IModel model) {
        List<PhoneRecord> list = new ArrayList<>();
        list.add(model.getRecordWithId(id));
        return list;
    }

}

3. How do I operate on my data?

By addressing this question, we can develop top-level APIs for our model. To interact with our model, we require commands that execute specific actions. These commands are abstracted in the ICommand component of the MVP architecture. They take the specific selection provided by the user and enact changes within the model. For our use cases, let's introduce two commands: NewCommand, responsible for creating a new record in the model, and DeleteCommand, which handles the deletion of records from the model.

ICommand:

public interface ICommand {
    public void execute(ISelection selection, IModel model);
}

NewCommand:

public class NewCommand implements ICommand {
    private PhoneRecord pr;

    public NewCommand(PhoneRecord newPhoneRecord) {
        this.pr = newPhoneRecord;
    }

    @Override
    public void execute(ISelection selection, IModel model) {
        model.createRecord(pr);
    }

}

DeleteCommand:

public class DeleteCommand implements ICommand {

    @Override
    public void execute(ISelection selection, IModel model) {
        if (selection == null) {
            return;
        }
        if (selection instanceof MultiPhoneRecordSelection) {
            List<PhoneRecord> listToDelete = selection.getSelections(model);
            for (PhoneRecord pr : listToDelete) {
                model.deleteRecordWithId(pr.getId());
            }
        } else if (selection instanceof SinglePhoneRecordSelection) {
            PhoneRecord pr = selection.getSelections(model).get(0);
            model.deleteRecordWithId(pr.getId());
        }
    }
}

Part 2: PhoneBook application via MVP

By working through the questions in Part 1, we have solved the modelling problem. Now it is time to establish the view and stitch its functionality together with the model with the help of the presenter.

4. How do I display my data?

For demonstration purposes, we will develop a straightforward command-line application to showcase our data. This view will offer users two options: "New" and "Delete" for respective actions. The presenter, seamlessly integrated within the view, will oversee the execution of business logic.

IView:

public interface IView {
    public void displayRecords(List<PhoneRecord> records);

    public void promptNewRecordCreation();

    public void confirmRecordDeletion(List<PhoneRecord> recordsToDelete);
}

PhoneBookView:

public class PhoneBookView implements IView {
    private IPresenter presenter;

    public PhoneBookView(IPresenter presenter) {
        this.presenter = presenter;
    }

    @Override
    public void displayRecords(List<PhoneRecord> records) {
        System.out.println("Phone Records:");
        for (PhoneRecord record : records) {
            System.out.println("ID: " + record.getId() + ", Name: " + record.getFirstName() + " " + record.getLastName()
                    + ", Phone: " + record.getPhoneNumber());
        }
    }

    @Override
    public void promptNewRecordCreation() {
        Scanner scanner = new Scanner(System.in);
        System.out.println("Enter new record details:");
        System.out.print("First Name: ");
        String firstName = scanner.nextLine();
        System.out.print("Last Name: ");
        String lastName = scanner.nextLine();
        System.out.print("Phone Number: ");
        String phoneNumber = scanner.nextLine();

        PhoneRecord newRecord = new PhoneRecord(null, firstName, lastName, phoneNumber);
        presenter.createNewRecordRequested(newRecord);
        // Note: we deliberately do not close the Scanner here; closing a Scanner
        // that wraps System.in would close standard input for the whole app.
    }

    @Override
    public void confirmRecordDeletion(List<PhoneRecord> recordsToDelete) {
        System.out.println("Are you sure you want to delete the following records?");
        for (PhoneRecord record : recordsToDelete) {
            System.out.println("ID: " + record.getId() + ", Name: " + record.getFirstName() + " " + record.getLastName() + ", Phone: " + record.getPhoneNumber());
        }
        System.out.println("Enter 'yes' to confirm deletion or 'no' to cancel:");
        Scanner scanner = new Scanner(System.in);
        String response = scanner.nextLine().toLowerCase();
        if (response.equals("yes")) {
            List<String> recordIds = recordsToDelete.stream().map(PhoneRecord::getId).toList();
            presenter.deleteRecordsRequested(recordIds);
        } else {
            System.out.println("Deletion canceled.");
        }
        // As above, don't close a Scanner that wraps System.in.
    }
}

5. What are my events?

Events are user-generated entities like button presses, mouse clicks, key presses, etc. Interactors primarily feature in UI-rich applications, where users engage with various event generation capabilities. These interactors are captured by the view. Since we're employing a command-line interface, user requests are entered manually, which obviates the need for an in-depth discussion of interactors. Here, we'll establish a placeholder for explanatory purposes.

IInteractor:

public interface IInteractor {
    // Methods for handling user events
}

6. How do I stitch this all together?

Finally, we address the most crucial question: gathering all the parts together and bringing our application to life. Below is the code that shows how the presenter works only on the data generated by the view and calls the appropriate commands.

IPresenter:

public interface IPresenter {
    public void onViewAttached(IView view);

    public void displayRecordsRequested();

    public void createNewRecordRequested(PhoneRecord newRecord);

    public void deleteRecordsRequested(List<String> recordIds);
}

PhoneBookPresenter:

public class PhoneBookPresenter implements IPresenter {
    private IView view;
    private IModel model;

    @Override
    public void onViewAttached(IView view) {
        this.view = view;
        this.model = new PhoneRecordModel();
    }

    @Override
    public void displayRecordsRequested() {
        List<PhoneRecord> records = model.getAllRecords();
        view.displayRecords(records);
    }

    @Override
    public void createNewRecordRequested(PhoneRecord newRecord) {
        NewCommand command = new NewCommand(newRecord);
        command.execute(null, model);
        displayRecordsRequested();
    }

    @Override
    public void deleteRecordsRequested(List<String> recordIds) {
        MultiPhoneRecordSelection selection = new MultiPhoneRecordSelection(recordIds);
        DeleteCommand command = new DeleteCommand();
        command.execute(selection, model);
        // Confirmation already happened in the view; refresh the list after deletion.
        displayRecordsRequested();
    }
}

Observations

Based on the above demonstration of application development, the following observations regarding MVP become evident:

  1. In a data-intensive environment, ensuring secure and streamlined access to data is paramount. MVP offers a robust abstraction to achieve this.
  2. Unlike the typical MVC architecture where the controller often bears the brunt of handling tasks, in MVP, the presenter simply does what a controller should be doing, which is mapping events to handlers.
  3. Abstracting view API logic from the controller ensures that any future changes in the view won't disrupt the underlying business logic, enhancing maintainability.
  4. Due to its focus on handling data and events, a presenter can be reused across multiple views, promoting code reusability and scalability.

In conclusion, MVP presents a superior approach and abstraction for application development, providing better maintainability and reusability.

References

[1] Mike Potel, "MVP: Model-View-Presenter. The Taligent Programming Model for C++ and Java", Taligent Inc., 1996.

Unix, BSD, Minix, Linux - What, Who and When

I have been reading about the history of Linux and free operating systems since I started using Debian in 2014. As a beginner, you might be unaware of the story of UNIX and Linux. If you are like me and love Linux-based free operating systems, you should read this article about the interesting history that took place from 1973 to 1994 in the community that made free operating systems possible.

Before you read: This article is a super compressed version of the timeline from 1973 to 1994. I may have missed the names of many co-creators and contributors who were part of this timeline; this is not intentional but due to my lack of knowledge. Please mention important personalities in the comments if you think they had a crucial role in the development and promotion of free operating systems.

Creation of UNIX

x86 port of UNIX Version 7

The original version of UNIX was written in assembly language, but in 1973 it was rewritten in the C programming language, which laid the foundation for the UNIX-like operating systems we use today (GNU, macOS, Ubuntu, etc.). The UNIX operating system had a very simple design and working style and was written in C, which made it very popular in the hacking community and the computing business. Over a period of 25 years or so, programmers contributed to the UNIX family to make it what it is now.

Today, UNIX is not an operating system but a specification. The original version, which was created and sold by AT&T Bell Labs, has expired. It could have been the most used OS of this era, but ill-advised decisions taken by Bell Labs's legal department prevented its organic growth and promotion.

UNIX was a very expensive product. According to Jon "Maddog" Hall, who was an early contributor to the development of Linux, a single copy of UNIX along with its source code was sold for $150,000. Such a high price was normal back then for OS software in the commercial space, but it also meant that individuals could not afford to see or use its source code.

UNIX was a work of art; it popularized two powerful features in operating systems, namely time-sharing (multitasking) and piping. These two features alone changed the way programmers wrote code. For example, with piping it became easy to reuse previously written programs instead of writing them from scratch in every new routine.

BSD: Berkeley Software Distribution

Screenshot of 4.4BSD-Lite

In 1974, Berkeley started working on its own operating system called BSD, based on UNIX's source code. It was an academic project at the University of California, Berkeley. Ken Thompson, the co-creator of UNIX, also helped with the release of version 1 of BSD in 1978. Initially, BSD focused on creating an improved version of UNIX and was meant as an add-on to UNIX V6.

Seeing the success of BSD in the academic field, the software was then distributed under the brand name BSDi, or BSD Inc., to distribute copies to people outside universities. Until version 4.3, BSD used AT&T's UNIX source code, which was subject to proprietary licenses. BSDi changed the terms of the license and released a new version named Networking Release 1, or Net/1, in 1989 under the BSD license. But even though this version had the BSD license, it still used a few components of UNIX's proprietary source code.

BSD developer Keith Bostic pushed for a non-AT&T version of BSD that contained no proprietary code, and after 18 months of work, Net/2 was released in June 1991, setting the stage for a freely distributable BSD.

After the release of BSD/386, which was based on a port of Net/2, BSDi soon found itself in legal trouble with AT&T. The BSDi vs. AT&T lawsuit took two years and hindered development. It was settled in 1994, largely in Berkeley's favour: of the 18,000 files in the software, only 70 had to be modified.

BSD released its final version, 4.4BSD-Lite, in 1995, and the group was then dissolved. The final versions of BSD led to the creation of various related operating systems like FreeBSD, OpenBSD and NetBSD. Apple's MacOS and iOS are also based on 4.4BSD and FreeBSD.

Minix

Screenshot of Minix

John Lions wrote a book explaining the UNIX source code. "John Lions' Commentary on UNIX v6" became very popular in universities and helped students and enthusiasts learn about operating systems. But then AT&T released Version 7 UNIX with a licensing clause that effectively said "you shall not teach". This made it impossible for professors and teachers to teach students about UNIX or to document its source code.

In 1987, Andrew S. Tanenbaum wrote a clone of UNIX and called it MINIX (MINi-unIX). It was a fun learning project for Tanenbaum and also a medium for students to learn about operating systems. He also wrote a companion book that explained the source code of MINIX.

MINIX was available to students and professors for free, but only in the academic space. It was not allowed to be used as commercial software and came without any kind of liability. Today, MINIX is available for both production and educational purposes.

Linux

X Window System desktop running Linux

Linus Torvalds joined the MINIX newsgroup comp.os.minix and started discussing various issues he was having while running MINIX on his x86 computer. To get around the limitations MINIX posed on his hardware, he began working on a personal operating system project that he could run on his machine. The development of this project was done on MINIX using the GNU C Compiler.

Torvalds posted this on comp.os.minix on 25 August, 1991:

Hello everybody out there using minix - I'm doing a (free) operating system (just a hobby, won't be big and professional like gnu) for 386(486) AT clones. This has been brewing since april, and is starting to get ready. I'd like any feedback on things people like/dislike in minix, as my OS resembles it somewhat (same physical layout of the file-system (due to practical reasons) among other things). I've currently ported bash(1.08) and gcc(1.40), and things seem to work. This implies that I'll get something practical within a few months, and I'd like to know what features most people would want. Any suggestions are welcome, but I won't promise I'll implement them :-) PS. Yes - it's free of any minix code, and it has a multi-threaded fs. It is NOT portable (uses 386 task switching etc), and it probably never will support anything other than AT-harddisks, as that's all I have :-( — Linus Torvalds

Even though Torvalds' ambition was to create an operating system, he ended up creating a kernel, later known by the name Linux, which then became an important component of almost all the incomplete free operating systems of that time.

GNU's own kernel was not ready, so the project adopted Linux to create the first working free operating system, called "GNU/Linux". Of all the distributions, "Debian Linux release" and later "Debian GNU/Linux" became the most popular. Since Linux was free, it quickly became the default kernel in free operating systems.

Tanenbaum-Torvalds debate

Torvalds and Tanenbaum together

As Linux started gaining attention in the community and in academia, members from both the MINIX and Linux camps started a moderate debate over kernel design. Big contributors from both projects got involved, and the arguments became more and more detailed and sophisticated.

In January 1992, Tanenbaum posted on comp.os.minix under the title "Linux is Obsolete", criticizing the monolithic design of the Linux kernel. A day later, Torvalds responded to the post, conceding that he finds the microkernel design (which MINIX has) superior "from a theoretical and aesthetical" point of view.

The debate and passive feud between the two kernel communities continued for a long time. The subject was revisited in 2006, when Tanenbaum wrote an article titled "Can We Make Operating Systems Reliable and Secure?".

Epilogue

After Linux adopted the GPL license, adoption happened at a grand scale. Today, Linux is used everywhere from NASA's Mars rovers to Tesla's Autopilot system. Linux is backed by major commercial companies like Microsoft, IBM and VMware through the Linux Foundation.

Conclusion

The free operating systems like Debian, Arch, FreeBSD and SUSE that we love today were not made possible just by the contributions of the Linux and GNU communities. It took the genius of Ken Thompson and Dennis Ritchie to come up with UNIX, the hacking culture of BSD to spread and improve it, and MINIX to lay the foundation.

Why do we need tests!

Running a program without tests is like using a mathematical formula without proof. We "hope" the program will work as expected for every input.

The process of converting pseudocode into a working program usually involves one special step at the end called testing. When we write a function, we imagine its parameters and expected return type. We check whether the function works by running it; if the results are as expected, we move on to the next function. If something goes wrong, we study the results, fix the problem and re-run the function until it yields the correct result.

But this process of manually re-running code to check for errors often leads us to miss cases. For example, foo(a) gives the expected result but foo(b) does not. We then fix the code to make foo(b) work but forget to re-run and check foo(a).

We want the expected result every time a function runs, and functions are designed that way, but it is practically impossible for programmers to manually verify the output of every function on every change. To tackle this, we use automated testing.

Automated testing is a procedure in which tests are written separately, in addition to the code. These tests are executed to check the expected output of the program. The outputs of the various specs and their test cases are validated with assertions. This way, whenever programmers change the code, they can run the test suite to validate the results.

In this article we will see how to set up testing in Node.js. I will be using Mocha, which is a testing framework for both front-end and back-end JavaScript, and we will cover the basics of testing by writing some specs and test cases for the lodash library.

Setup

I have prepared a GitHub repo containing all the code and test files used in this tutorial. You should begin by cloning it and installing the dependencies.

git clone https://github.com/ap4gh/testing_with_mocha.git
cd testing_with_mocha
npm install

Run the tests in the terminal with the following command.

npm run test

The final version of the testing code is in the master branch; there is also a start branch containing a boilerplate setup for you to practice with. The npm run test script hot-reloads Mocha as you create new tests.

Specification

A specification, or spec, is an entity that describes what a piece of functionality is expected to do in various cases. For example, let's say we want to test the Math.max() function; we can write a spec in the following manner.

describe('Math.max', function() {
    it('finds maximum of two numbers', function(done) {
        assert(Math.max(1, 10) === 10);
        done();
    });
});

This snippet is one complete spec and contains three kinds of blocks.

describe As the name suggests, it describes what functionality we are testing. It takes a string description like 'Math.max' and an anonymous function as parameters.

it This block executes the testing code and asserts the output. It also takes a string description of the test case and a done callback as parameters. The description of an it block explains what the test case is about.

assert The output of the functionality is validated by comparing it with an expected value. The assert call performs the comparison and fails the test if it does not hold.

done is a special callback that tells Mocha to conclude a test case. We will discuss it in more detail later in this tutorial.

A Simple Test

All the test files are contained in a directory named test in the root of the project. Testing frameworks look for this directory and execute every test file in it. In the cloned repo, open the test/test1.js file, which has the following code.

const assert = require('assert');
const _ = require('lodash');

describe('Loadash Array Test 1', () => {
    let testArray = [1, 2, 3, 4, 5];

    it('finds head of the array', (done) => {
        assert.equal(_.head(testArray), 1);
        done();
    });

    it('slice first 2 elements', (done) => {
        const slicedArray = _.take(testArray, 2);
        assert.equal(slicedArray.length, 2);
        assert(slicedArray.includes(1) && slicedArray.includes(2));
        done();
    });

});

This is a very simple test spec; it has one describe block and two test cases. The describe block explains what the spec is about and also declares a variable, testArray, on which the tests are performed.

The first case tests the _.head method. As intended, this method should return the first element of an array. The return value is then compared with the expected value 1, the first element of testArray.

The assertion is done with the assert.equal method. We could also have passed an expression instead of using .equal.

assert(_.head(testArray) === 1);

This is the basis of writing tests. We create a describe block to check a functionality and create multiple test cases with it blocks. We can also make multiple assertions inside an it block if required.

Hooks

Hooks are user-defined blocks of code that run outside the scope of the tests. If we want to execute code that supports the tests but has nothing to do with the test cases directly, we use hooks.

For example, it blocks can change the original value of the test variable testArray while performing their tests. This would lead to poor testing and incorrect assertions in other cases.

We can use a beforeEach hook that resets the value of the test variable before each case is executed. Hooks must be defined in the scope where they are needed; in our case we have defined a hook inside the describe block that runs before each it block.

describe('Lodash String Test 1', () => {
    let testStr = '';

    beforeEach(() => {
        testStr = 'hello    ';
    });

    it('finds last character', (done) => {
        assert(!_.endsWith(testStr, 'd'));
        done();
    });

    it('finds last character to a position', (done) => {
        assert(_.endsWith(testStr, 'l', 3));
        done();
    });

    it('trims the extra space in the end', (done) => {
        assert.equal(_.trim(testStr), 'hello');
        done();
    });
});

Hooks take an anonymous function as a parameter. In the above code, a simple beforeEach is defined inside describe that resets the value of testStr before each it block is executed.

Mocha offers three more hooks: afterEach, after and before. The after and before hooks run only once in the scope where they are defined. If you want a hook that runs before or after the entire test procedure, you can put it inside a helper file. In the cloned repo you will find a helper.js file containing after and before hooks.

Nested Describe

If a functionality has sub-functionality that deserves its own spec, you can nest specs inside the parent spec.

describe('Objects Test Specs', () => {
    let testObj = {};

    beforeEach(() => {
        testObj['a'] = {
            key: 'a',
            value: 'A'
        }
        testObj['b'] = {
            key: 'b',
            value: 'B'
        }
    });

    describe('Lodash Object Test 1', () => {
        it('finds associated key for a value', (done) => {
            const key = _.findKey(testObj, { key: 'a', value: 'A' });
            assert.equal(key, 'a');
            done();
        });

        it('checks if key exist', (done) => {
            assert(!_.has(testObj, 'c'));
            done();
        });
    });
});

In the above code, the spec Lodash Object Test 1 is nested inside Objects Test Specs. The beforeEach hook defined in the outer describe applies to every it block, including those in the nested spec.

Working with Promises

Mocha does not automatically wait for asynchronous work inside a test case to finish. While working with promises, the test therefore needs a way to signal when it is actually done.

To work with promises, you can utilize the done callback. Let's see it with an example.

const assert = require('assert');
const wait = require('../src/wait');

describe('wait Promise', () => {
    it('returns \'completed\' after 1 sec', (done) => {
        wait.then(value => {
            assert(value === 'completed');
            done(); // testing finish here
        });
    });
});

In the above code, the wait promise takes 1 second to return a value. done is called inside the .then block to tell Mocha to conclude the test. done is an important callback that must be called once an asynchronous test case has finished its assertions.

Conclusion

Something I did not explain fully is the assert module; it is available directly via Node. You can read more about it here. If you followed this article, you now know the minimum required to set up and run tests in JavaScript. Setting up tests in a JavaScript/Node project is easy, but your tests are only as good as your assertion logic. If you have any suggestions or queries, please comment down below or DM me on Twitter.

Further reading: - Mocha - Testing on front-end

Some invaluable tips for programmers

Last week I watched some videos and read articles from experts in computer science academia and the software industry. I have skimmed some great programming wisdom and aggregated it in this article.

Program not Code

On a very basic level, the terms coding and programming can be used interchangeably. But let's add some philosophy behind their meanings.

When you program, your job is to craftily solve problems, handle errors gracefully, use good component names and document the code. Programming also requires you to find ways around the limitations of a language or ecosystem and to improve them.

When you code, you are simply making things work and getting constantly annoyed by the shortcomings of the language constructs. Coding is a job of applying fixes and patches while disregarding the complexity.

Big Interfaces make weaker Abstractions

A class or object with too many methods makes a program bloated. Multiple methods for variations of the same job are unnecessary. For example, when you create a class that converts a given decimal number to hexadecimal, binary, octal, etc., you can write a method for every conversion, or just one that accepts a base parameter and performs the conversion for that base. In the latter case, the interface is small but allows more uses with less complexity.

// too many interfaces
convert.toBin(10);
convert.toHex(10);
convert.toOct(10);

// just one interface
convert.to(10, 2);
convert.to(10, 16);
convert.to(10, 8);

A little copying is better than a dependency

Programmers move on from their code; it's practically a universal fact. When the code base of a package you depend on is not maintained regularly, the possibility of dependency hell increases. Sometimes our code does not even use a dependency fully. A clever move is to copy the small piece of functionality you need instead of adding an extra node to the dependency tree.

Errors are good, treat them like friends

Learning to program is largely about learning to make mistakes and then correcting them. A naive move most of us make is to ignore errors: generally, when a program throws errors, we fix them and move on. What we should do instead is reinforce the occurrence of errors with helpful messages and documentation. Introduce errors to users as you would introduce a friend to your family; a simple if (err) print(err) should not be the go-to practice.
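
To make the idea concrete, here is a small sketch in Go (the function name, file path and message are made up for illustration): instead of printing the raw error and moving on, the error is wrapped with context about what was being attempted, so whoever sees it knows what actually failed.

package main

import (
    "fmt"
    "os"
)

// loadConfig wraps the low-level error with context about what failed,
// instead of just printing it and carrying on.
func loadConfig(path string) ([]byte, error) {
    data, err := os.ReadFile(path)
    if err != nil {
        return nil, fmt.Errorf("loading config from %q: %w", path, err)
    }
    return data, nil
}

func main() {
    if _, err := loadConfig("settings.toml"); err != nil { // example path
        fmt.Println(err) // e.g. loading config from "settings.toml": open settings.toml: no such file or directory
    }
}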

Use good Component Names and Comment the code

Naming is the single most underrated aspect of programming. Names should describe the purpose of the code. A good name can convey whether the named entity is an object, a variable, a function, etc. Naming alone can cover the documentation needs of a small program, but describing the code in comments will make the source a haven for future developers.

Use a style guide for code formatting

Let's get real: we don't like style guides, but they are awesome because they exist. Programmers talk through their code, and a style guide provides convention. Even if you hate them, you should care about future programmers and adopt one.

Use a suited Programming Language

A programming language comes with its own ecosystem, which means you can include previously written code in your own, incorporate supported tools and get help from the community. Any Turing-complete language can solve any solvable computational problem, so one language can indeed do the job of another. It is therefore not a question of what a language can or cannot do, but of how hard it is to use for a certain job. For example, when it comes to rapid debugging and human-friendly syntax, Python wins, but at the same time it is not well suited to game programming. Similarly, C++ is a powerful OOP language for writing games but hard to use for front-end web apps.

Orders of Ignorance

Our field requires constant touch with ongoing research and experiments with the latest tools and libraries. We not only get to enjoy learning new things every day but also get rewarded for it.

Learning software development is a "knowledge acquisition activity". To acquire knowledge, we should know the five orders of ignorance proposed by Phillip G. Armour.

0th Order Ignorance - Lack of Ignorance. I know how to do something and can demonstrate my lack of ignorance with some sort of output.

1st Order Ignorance - Lack of Knowledge, I don't know something but I know that I don't know how to do it and I know what I need to learn in order to be able to do it.

2nd Order Ignorance - Lack of Awareness, I don't know that I don't know something.

3rd Order Ignorance - Lack of Process, I don't know a suitable efficient way to find out I don't know that I don't know something.

4th Order Ignorance - Meta Ignorance, I don't know about the five orders of ignorance.

Once we know our orders of ignorance, it becomes easier to work on them. For example, while building a software system, the 2nd and 3rd orders of ignorance are the most dominant. Knowing these orders also conveys how hard it may be for us to solve a problem with programming.

Conclusion

This month I started to learn programming (again) by working through Brian Kernighan's C programming book, and I have found some alarming deficiencies in my programming skills. I have been coding for the past two and a half years, and I had always felt that something was missing in the way I program. When my code fails, I always look for "help" instead of a "solution". At first I thought maybe I lacked proper planning skills, but after thorough introspection I found that the problems were in my habits. I have learnt a lot from these tips, and I hope they will be of help to you some day.

What Go Programming Language does and does not have

Go has the benefit of hindsight, and the basics are well done: it has garbage collection, a package system, first-class functions, lexical scope, a system call interface and immutable strings in which text is generally encoded in UTF-8. But it has comparatively few features and is unlikely to add more. For instance, it has no implicit numeric conversions, no constructors or destructors, no operator overloading, no default parameter values, no inheritance, no generics, no exceptions, no macros, no function annotations and no thread-local storage.

Before you read: The above passage is from the book "The Go Programming Language" by Alan A. A. Donovan and Brian W. Kernighan, page xiv. The points mentioned below are a brief and somewhat incomplete explanation of terms used in programming language design. I have tried to explain all the concepts from the angle of Go programming. The points themselves are not mine; they are taken from the passage. I am in no way advocating for Go or any other language.

We will now try to understand each term briefly. As a beginner in core programming, having knowledge of these terms is important. These concepts apply to every programming language and can help you distinguish languages on a fundamental level.

✅ Things that Go has

Garbage Collection

Garbage collection is the part of a language runtime that does automatic memory management. To understand garbage collection, or memory management in general, you first need to understand how memory works. While a program runs, memory locations are allocated in the system to store data, e.g., when creating a variable or looping over an array. This memory needs to be allocated and de-allocated to keep the program efficient with memory.

In a language like C, memory management is done manually; if you are familiar with C, you know there is a function called malloc that dynamically allocates memory. In a high-level language like JavaScript or Python, these allocations are managed automatically by a program known as the garbage collector. As the name suggests, its job is to manage memory: assign locations when needed and free them when they are no longer in use. Go has garbage collection, so programmers do not have to worry about managing memory and space.
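
A minimal sketch of what this looks like in Go (the function name and sizes are only illustrative): the slice is allocated on demand, and once nothing references it any more, the garbage collector reclaims the memory; there is no free or delete call anywhere.

package main

import "fmt"

// makeSquares allocates a new slice on every call.
// There is no manual free: once the returned slice is no longer
// referenced, the garbage collector reclaims its memory.
func makeSquares(n int) []int {
    squares := make([]int, n)
    for i := range squares {
        squares[i] = i * i
    }
    return squares
}

func main() {
    for i := 0; i < 3; i++ {
        s := makeSquares(5)
        fmt.Println(s)
    } // each slice becomes garbage after its iteration ends
}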

Package System

Packaging a piece of software means bundling up all of its source code and assets into one entity called a package. A software package is handy in many ways: easy installation, sharing, contributing, debugging, etc. Go has a built-in package system that bundles up documentation, binaries and source files. The purpose of packaging is to be able to use other software projects in your own without having to manually copy over the source code.
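
As a small, hypothetical sketch (the module path, package and function names are made up for illustration): a project can expose a greetings package and import it from main by its path, without copying any source code around.

// greetings/greetings.go
package greetings

// Hello is exported because its name starts with a capital letter.
func Hello(name string) string {
    return "Hello, " + name + "!"
}

// main.go (assumes a go.mod declaring the module example.com/todo)
package main

import (
    "fmt"

    "example.com/todo/greetings"
)

func main() {
    fmt.Println(greetings.Hello("gopher"))
}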

First-class Functions

A first-class function is a function that can be treated like any other value, i.e., it can be assigned, returned, exported, passed as a parameter, etc. Take a look at the following snippet written in Go. A function that prints the string "hello world first class function" is assigned to a variable a. The variable a holds an actual value in memory, yet it can also be called as a function by appending () to it. You can also see that the type of a is printed just like that of any other variable. This is the basic idea of first-class functions.

package main

import (  
    "fmt"
)

func main() {  
    a := func() {
        fmt.Println("hello world first class function")
    }
    a()
    fmt.Printf("%T", a)
}

Lexical Scope

Scope in a program is the block or area over which the definition of a variable or function is valid. For example, a variable declared inside a function only has meaning inside that function block, i.e., between the curly braces { }. If you try to access such a variable outside the function block, the program will not be able to find it. This is a basic way to understand lexical scope; strictly speaking, it is a method of scoping rather than the scope itself.

package main

import "fmt"

func main() {

    {
        v := 1
        {
            fmt.Println(v)
        }
        fmt.Println(v)
    }

    fmt.Println(v)
    // “undefined: v” compilation error

}

In the above snippet there are four scopes: one, the universal scope; two, the function main(); three, the first block inside main; and four, the scope where fmt.Println is called for the first time. Of the three Println calls, the last one gives a compilation error. This is because the definition of the variable v is only available in scopes three and four. When Println is called with v passed in as a parameter, the program first looks for its definition in the current scope; when it fails to find it, it moves outward to the parent scope, and it keeps doing that until it finds the definition. This is what lexical scoping does: the program starts looking for the definitions of variables and functions in the scope where they are used and moves from the inside out. In the last fmt.Println, the program cannot find a definition of v in the current scope or any parent scope and hence gives a compilation error.

System call interface

Go provides a system call interface, which serves as the link to the system calls made available by the operating system, for example opening and reading a file, or doing input and output. It takes the calls made through Go's standard library and invokes the necessary system calls in the operating system.
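
For instance, reading a file through the os package ultimately goes through Go's system call layer (the file name below is just an example):

package main

import (
    "fmt"
    "os"
)

func main() {
    // os.ReadFile wraps the open/read/close system calls
    // provided by the operating system.
    data, err := os.ReadFile("notes.txt") // example file name
    if err != nil {
        fmt.Println("could not read file:", err)
        return
    }
    fmt.Printf("read %d bytes\n", len(data))

    // The syscall package exposes the raw interface;
    // os.Getpid is a thin wrapper over the getpid system call.
    fmt.Println("process id:", os.Getpid())
}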

Immutable Strings

Although Go's syntax has the similarity and simplicity of C, it improves on it with immutable strings that are encoded in UTF-8, so programs written in Go can easily form strings in multiple languages and symbol sets. In a primitive sense, strings are a combination (array or list) of characters, and since they are formed by combining characters, their composition can normally be changed: characters can be appended, removed, moved, etc. Immutability means that once a string is declared, its composition cannot be changed (mutated). The concept of immutable strings is not new: in Python, string instances cannot be mutated; JavaScript too has immutable strings; and Ruby added frozen string literals in 2.3. But still, a great many popular languages like C++, PHP and Perl do not have immutable strings.
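
A minimal sketch of what immutability and UTF-8 encoding look like in practice (the string is just an example):

package main

import (
    "fmt"
    "unicode/utf8"
)

func main() {
    s := "héllo" // string literals are UTF-8 encoded
    // s[0] = 'H' // compile-time error: strings cannot be mutated in place

    t := "H" + s[1:] // "changing" a string means building a new one
    fmt.Println(s, t) // héllo Héllo

    // len counts bytes; utf8.RuneCountInString counts characters.
    fmt.Println(len(s), utf8.RuneCountInString(s)) // 6 5
}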

❌ Things that Go does not have

Implicit numeric conversion

In programming, type conversion refers to changing the data type of an entity to another. An implicit conversion means this change happens automatically in the interpreter or compiler, for example when assigning an int value to a variable that previously held a float value. Such conversion is not available in Go. When the type is not mentioned while declaring a variable, Go assigns a suitable type like int, float64 or string based on the syntactical form of the literal. In the example given below, Go throws an error because it finds two different data types and cannot perform the operation on them; the Go compiler does not implicitly convert int to float64.

a := 1.0 // same as float64(1.0)
b := 1 // same as int(1)

fmt.Printf("%f", a*b) 
// invalid operation: a * b (mismatched types float64 and int)

Constructors and Destructors

The job of a constructor is to initialize an object, whereas a destructor destroys the object at the end of its lifetime and frees up memory. Unlike other object-oriented languages, Go does not have classes, so the concept of constructors and destructors does not exist.
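
There is a common convention that fills part of the gap, though: packages often provide an ordinary NewXxx function that plays the role of a constructor (the Task type below is hypothetical), while cleanup is left to the garbage collector or to an explicit Close/defer rather than a destructor.

package main

import "fmt"

type Task struct {
    Title string
    Done  bool
}

// NewTask is a plain function that acts like a constructor by convention.
func NewTask(title string) *Task {
    return &Task{Title: title}
}

func main() {
    t := NewTask("write tests")
    fmt.Println(t.Title, t.Done)
    // No destructor: the garbage collector frees t once it is unreachable.
}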

Operator Overloading

Operator overloading is a feature by which operators can perform operations defined by the user; the operators behave according to the arguments passed. For example, in C++ the + operator can be used for string concatenation as well as for adding two integers, and the meaning of + can be redefined by the user according to the program's needs. In JavaScript, an operation like '1' + 1 results in the string "11" because the number is coerced to a string. Such definitions are not allowed in Go: operators work strictly and only on specific argument types.
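
A quick sketch of the contrast: the expression that JavaScript coerces into "11" simply does not compile in Go, and the conversion has to be spelled out, for example with strconv.

package main

import (
    "fmt"
    "strconv"
)

func main() {
    s := "1"
    n := 1

    // fmt.Println(s + n) // compile-time error: mismatched types string and int

    // The conversion must be explicit.
    fmt.Println(s + strconv.Itoa(n)) // "11"
}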

Default Parameter Values

Go does not allow default values in function signatures, nor function overloading. The Go language specification is remarkably small and is purposefully kept that way to keep the parser simple. Unlike other languages where you can give a parameter a default or optional value, in Go you can only check whether a meaningful value was passed. An approach to default values in Go looks something like this.

func Concat1(a string, b int) string {

  if a == "" {
    a = "default-a"
  }
  if b == 0 {
    b = 5
  }

  return fmt.Sprintf("%s%d", a, b)
}

Inheritance

Since Go does not follow the conventional class hierarchy of object-oriented programming, structures in Go are not inherited from one another. In general, inheritance is a mechanism in OOP languages by which one class inherits the properties and methods of its parent class, and it can go multiple levels deep. In Go, however, a structure is composed simply by embedding, or holding a pointer to, the collaborating structures. An example of composition in Go is given below. Interfaces can serve as a replacement for class hierarchies in Go; interfaces exist in other languages too, but Go's interfaces are satisfied implicitly.

type TokenType uint16

type Token struct {
  Type TokenType
  Data string
}

type IntegerConstant struct {
  Token *Token
  Value uint64
}

Generic Programming

Generic programming is a style in which we write templates, known as generics, which are not concrete source code themselves; the compiler instantiates them to produce the actual source code. Let's try to understand templates in a simple way. Think of a template as a form where the crucial details are left blank, to be filled in later during compilation. When we need to create something from that template, we just specify the details, for example the type.

template<typename T>
class MyContainer
{
    // Container that deals with an arbitrary type T
};

int main()
{
    // Make MyContainer take just ints.
    MyContainer<int> intContainer;
}

In the above C++ snippet, the template is not given a type; it receives one when MyContainer is instantiated. We can also specify other types like float or double according to our needs. Generics and templates are useful when running the same algorithm over data of multiple types.

Exceptions

An exception indicates a condition that an application might reasonably want to catch; through exceptions we can handle conditions under which the program would otherwise fail. A checked exception does not bring execution to a complete stop; it can be caught and dealt with. Go does not have exceptions; it only has errors, expressed through the built-in error interface. A crucial distinction is that Go errors have to be checked explicitly where they occur, which makes programming in Go stricter.
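
A minimal sketch of the idiom (the file name is just an example): instead of a try/catch block, the error is returned as a value and checked right where the call is made.

package main

import (
    "fmt"
    "os"
)

func main() {
    f, err := os.Open("config.json") // example file name
    if err != nil {
        // No exception is thrown; the error is an ordinary value.
        fmt.Println("open failed:", err)
        return
    }
    defer f.Close()
    fmt.Println("opened", f.Name())
}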

Macros

Macro stands for macro instruction. It is a way of minimizing repetitive tasks in programming by defining a preset output for a given set of inputs. For example, if we want the square of a number in C we can just write x * x, where x is the variable, but we can also define a macro that returns the square of a number every time we need it. Macros are not functions. Macros are not available in Go.

#define square(x) ((x) * (x))

int main() {
    int four = square(2);  // same as 2 * 2
    return 0;
}

Function Annotations

Annotations are a way of associating metadata with function parameters and return values. In Python, annotations are syntactically supported and entirely optional. Let's take a small example of what annotations look like in Python.

def foo(a: int, b: 'description', c: float) -> float: print(a + b + c)

foo(1, 3, 2)                 # prints 6
foo('Hello ', '', 'World!')  # prints Hello World!

In the above code, the parameters a, b and c are all annotated with some metadata: a and c are annotated with the int and float types, whereas b carries a string description. Python does not enforce these annotations, so foo prints its output regardless of the types mentioned in them.

Thread-local Storage

Thread-local storage is a programming method that uses static or global memory local to a thread. It is a static area into which data is copied for each thread of a program, so each thread gets its own copy. When multiple threads perform the same task over the same static data, each works on its thread-local copy rather than sharing one.

Conclusion

The creation of Go was focused on simplicity and elegance. It is fast and small and has a simple syntax. There are fewer concepts to wrap your head around than in other OOP languages. The creators of Go kept the language simple by not letting one feature add multiplicative complexity to adjacent parts of the language; hence Go has no feature that would make the parser slower and bigger. Simplicity is the key to good software.

NOTE: Code snippets in this article are copied from various articles on web.