Apex Programming Best Practices for Developers

Let's get something straight. Anyone can learn the syntax of Apex and make something *work*. You can stitch together some code, deploy it, and for a little while, everything seems fine. But that's not the job, is it? The real job, the one that separates a junior coder from a senior developer, is writing code that doesn't just work today. It's about writing code that works a year from now, that your colleagues can understand, and that doesn't bring your entire Salesforce org to a screeching halt when your data volume triples.

I've been in the Salesforce trenches for a long time. I've seen the good, the bad, and the truly horrifying code that people have pushed to production. The biggest mistake I see people make is thinking that "fast" is the same as "good." They rush to a solution without thinking about the consequences. Today, I'm going to share some of the hard-won lessons I've learned. This isn't textbook theory. This is practical, real-world advice for professional Apex Programming that will make your code better, your org more stable, and your life easier. You ready?

It's Not Just About Making It Work; It's About Making It Last

Think of your code as a building. You can slap together a shack in a day, and it might keep the rain out tonight. But what happens when a storm hits? Or when you need to add another room? It falls apart. Good code is like a building with a solid foundation. It's built with purpose, designed for future expansion, and can withstand stress.

This is the core of what we're talking about: writing code that is scalable, readable, and maintainable. Scalable means it performs just as well with ten records as it does with ten thousand. Readable means another developer (or you, six months from now) can look at it and understand what's happening without needing a four-hour meeting. Maintainable means you can fix bugs or add features without breaking ten other things. This isn't just a "nice to have." It's a professional responsibility.

The Cardinal Rules of Apex You Can't Ignore

If you take nothing else away from this, burn these next points into your brain. Ignoring them is the fastest way to create technical debt and performance nightmares. I've seen it happen more times than I can count.

Bulkification is Non-Negotiable

This is Apex 101, yet it's the most common and destructive mistake. You must never, ever, place a SOQL query or a DML statement (insert, update, delete) inside a loop. Ever.

Why? Because Apex runs in a multi-tenant environment. Salesforce has to protect its resources, so it imposes governor limits. One of those limits is the number of SOQL queries (100) and DML statements (150) you can have in a single transaction. If your trigger processes a data load of 200 records, and you have one SOQL query inside your `for` loop, you've just tried to run 200 queries. The result? Your transaction fails. Miserably.

I remember a project early in my career where we had a trigger on the Opportunity object. It worked perfectly in the sandbox with our handful of test records. The day we deployed, the sales team did a massive data import. The entire org ground to a halt. Phones started ringing. It was a nightmare. The culprit? A single, innocent-looking SOQL query inside a `for` loop. We had to roll back the deployment and fix it under pressure. It was a lesson I never forgot.

Here's what not to do:

// BAD CODE - DO NOT USE
trigger AccountTrigger on Account (before update) {
    for (Account acc : Trigger.new) {
        // SOQL query inside a loop. This is a disaster waiting to happen.
        Contact mainContact = [SELECT Id, Phone FROM Contact WHERE AccountId = :acc.Id AND IsPrimary__c = true LIMIT 1];
        acc.BillingPhone__c = mainContact.Phone;
    }
}

And here's how you fix it. You collect all the IDs first, run one query, and then process the results.

// GOOD CODE - BULKIFIED
trigger AccountTrigger on Account (before update) {
    Map<Id, Contact> primaryContactsMap = new Map<Id, Contact>();

    // 1. Collect all the Account IDs from the trigger context.
    Set<Id> accountIds = Trigger.newMap.keySet();

    // 2. Run ONE query to get all the related contacts.
    for (Contact con : [SELECT Id, Phone, AccountId FROM Contact WHERE AccountId IN :accountIds AND IsPrimary__c = true]) {
        primaryContactsMap.put(con.AccountId, con);
    }

    // 3. Loop through the trigger records and use the map to process the logic.
    for (Account acc : Trigger.new) {
        if (primaryContactsMap.containsKey(acc.Id)) {
            acc.BillingPhone__c = primaryContactsMap.get(acc.Id).Phone;
        }
    }
}

See the difference? It's a simple pattern: Collect IDs, query once, process with a map. This is the heart of bulkification.

Write One Trigger Per Object

This is a point of contention for some, but I believe it's the only sane approach. When you have multiple triggers on the same object (e.g., three different triggers on Account), you can't control their order of execution. Salesforce makes no guarantee about which one will run first. This leads to unpredictable behavior and bugs that are almost impossible to track down.

Instead, create a single trigger for each object. This trigger's only job is to be a dispatcher. It looks at the trigger context (`isBefore`, `isInsert`, `isUpdate`, etc.) and calls the appropriate methods from a separate handler class. This centralizes your logic, makes the execution order explicit, and keeps your code organized.
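
Here's a minimal sketch of that dispatcher pattern. The handler class and method names (`AccountTriggerHandler`, `handleBeforeInsert`, and so on) are placeholders I'm using for illustration; established trigger frameworks vary, but the shape is the same.

trigger AccountTrigger on Account (before insert, before update) {
    // The trigger only routes; the handler class does the actual work.
    if (Trigger.isBefore && Trigger.isInsert) {
        AccountTriggerHandler.handleBeforeInsert(Trigger.new);
    } else if (Trigger.isBefore && Trigger.isUpdate) {
        AccountTriggerHandler.handleBeforeUpdate(Trigger.new, Trigger.oldMap);
    }
}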

Keep Your Logic Out of Triggers

This goes hand-in-hand with the "one trigger per object" rule. Your `.trigger` file should be lean. It shouldn't contain any business logic. All the heavy lifting—the complex calculations, the data manipulation, the decision-making—should live in a separate Apex class, often called a "handler" or "service" class.

Why does this matter?

  • Reusability: You can call methods in your handler class from other places, like a batch job or a controller for Lightning Web Components (LWC), without having to copy-paste code.
  • Testability: It's much easier to write a unit test for a specific method in a class than it is to test a complex trigger that does ten different things. You can test each piece of logic in isolation.
  • Clarity: The trigger file tells you *when* code runs. The handler class tells you *what* that code does. It's a clean separation of concerns.
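
And here's a rough skeleton of the handler class that dispatcher would call. Again, the names are illustrative rather than a prescribed framework.

public with sharing class AccountTriggerHandler {

    public static void handleBeforeInsert(List<Account> newAccounts) {
        setPrimaryContactPhone(newAccounts);
    }

    public static void handleBeforeUpdate(List<Account> newAccounts, Map<Id, Account> oldAccounts) {
        setPrimaryContactPhone(newAccounts);
    }

    // The actual business logic lives here, where a batch job, an LWC controller,
    // or a test can call it directly without going through the trigger.
    public static void setPrimaryContactPhone(List<Account> accounts) {
        // ... the bulkified collect-IDs, query-once, process-with-a-map pattern from earlier ...
    }
}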

Thinking Beyond the Code: Governor Limits and Performance

Good Apex Programming isn't just about syntax; it's about working efficiently within the Salesforce platform's constraints. You have to be mindful of governor limits beyond just SOQL and DML.

SOQL Queries are a Precious Resource

We've already established you shouldn't put queries in a loop. But the quality of your queries matters, too. You need to write *selective* queries. A selective query is one that uses an indexed field in the `WHERE` clause. Standard fields like Id, Name, and OwnerId are indexed automatically. Custom fields marked as External ID or Unique are indexed as well, and you can ask Salesforce Support to create a custom index on other fields.

Using an indexed field allows the Salesforce query optimizer to find the records it needs quickly, without scanning the entire database table. A non-selective query, especially in an org with millions of records, can be slow and may even time out. Avoid using leading wildcards (`LIKE '%value'`) on non-indexed text fields, as this almost always results in a full table scan.
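
To make "selective" concrete, here's a sketch. I'm assuming a custom external ID field called `External_Key__c`, which gets an index automatically; the filters themselves are just illustrations.

String externalKey = 'ACME-001';

// Selective: both filters hit indexed fields, so the query optimizer can use an index.
List<Account> mine  = [SELECT Id, Name FROM Account WHERE OwnerId = :UserInfo.getUserId()];
List<Account> byKey = [SELECT Id, Name FROM Account WHERE External_Key__c = :externalKey LIMIT 1];

// Non-selective: a leading wildcard on a non-indexed text field forces a full table scan.
List<Account> slow  = [SELECT Id, Name FROM Account WHERE Description LIKE '%consulting%'];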

Asynchronous Apex is Your Best Friend

Sometimes, a process is just too big to run in a single, synchronous transaction. Maybe you need to process thousands of records, or you need to make a callout to an external system. This is where asynchronous Apex comes in. It lets you run jobs in the background, with higher governor limits and more processing time.

You have a few tools at your disposal:

  • @future Methods: These are simple, "fire-and-forget" methods. They're great for quick, isolated tasks like making a callout to an external API as part of a Salesforce Integration. The downside is you can't chain them together easily.
  • Queueable Apex: This is the modern successor to `@future`. It's more flexible. A queueable job is an object, so you can hold state. Most importantly, you can chain one job to another, which is perfect for sequential processes.
  • Batch Apex: This is the workhorse for processing massive data volumes. It breaks your record set into manageable chunks and processes each chunk in a separate transaction. If you need to update every Contact in your org, Batch Apex is the tool for the job.

Knowing when to use each is key. Don't use Batch Apex for a simple callout, and don't try to process 50,000 records with a `@future` method. Use the right tool for the job.
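
For a feel of the Queueable option, here's a minimal sketch. The class name (`PrimaryContactSyncJob`) and the commented-out chaining call are made up purely to show the shape: a Queueable is a class that implements the `Queueable` interface, holds its own state, and can enqueue the next job from `execute`.

public class PrimaryContactSyncJob implements Queueable {
    private Set<Id> accountIds;

    public PrimaryContactSyncJob(Set<Id> accountIds) {
        this.accountIds = accountIds; // The job is an object, so state travels with it.
    }

    public void execute(QueueableContext context) {
        // Heavy lifting runs here in its own transaction, with async governor limits.
        List<Account> accounts = [SELECT Id, BillingPhone__c FROM Account WHERE Id IN :accountIds];
        // ... process and update the records ...

        // To chain the next step: System.enqueueJob(new SomeFollowUpJob());
    }
}

// Kicking it off, e.g. from a trigger handler:
// System.enqueueJob(new PrimaryContactSyncJob(Trigger.newMap.keySet()));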

Understanding CPU Time Limits

This is a governor limit that trips up even experienced developers. You only get a certain amount of CPU processing time for each transaction (e.g., 10,000 milliseconds). It's not just about database operations. Inefficient code, like nested loops that perform complex calculations or string manipulations on a large dataset, can eat up CPU time and cause your transaction to fail. Always be thinking: "What's the most efficient way to do this? Can I reduce the number of loop iterations? Can I use a Map to avoid nested loops?"
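
Here's the kind of refactor that saves CPU time. Assume `accounts` and `contacts` are lists you've already queried; the field names simply reuse the earlier examples.

// Slow: for every account we rescan the whole contact list (O(n * m) comparisons).
for (Account acc : accounts) {
    for (Contact con : contacts) {
        if (con.AccountId == acc.Id) {
            acc.BillingPhone__c = con.Phone;
        }
    }
}

// Faster: build a map once, then do constant-time lookups (roughly O(n + m)).
Map<Id, Contact> contactsByAccountId = new Map<Id, Contact>();
for (Contact con : contacts) {
    contactsByAccountId.put(con.AccountId, con);
}
for (Account acc : accounts) {
    Contact match = contactsByAccountId.get(acc.Id);
    if (match != null) {
        acc.BillingPhone__c = match.Phone;
    }
}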

Modern Apex: Connecting to the Wider Salesforce Ecosystem

Your Apex code doesn't live in a vacuum. It's the engine that powers user interfaces and interacts with the latest platform features. Being a great Apex developer means understanding how your code fits into the bigger picture.

Apex for Lightning Web Components (LWC)

With Lightning Web Components (LWC) being the standard for UI development on the platform, your Apex skills are more important than ever. Your LWCs will need to fetch data, save data, and execute business logic. That's all done through Apex methods.

The key here is to be efficient. When an LWC calls an Apex method to get data, use the `@AuraEnabled(cacheable=true)` annotation whenever possible. This tells the Lightning Data Service that the data can be cached on the client side, which makes your application feel faster and reduces server roundtrips. Also, make sure your Apex methods return only the data the component needs. Don't send back an entire sObject with 50 fields if the component only displays three of them. Create a lightweight wrapper class or use a map to return a lean data structure.
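
A minimal sketch of that pattern, using a hypothetical controller and wrapper class (`AccountSummaryController`, `AccountSummary`) that return only what the component needs:

public with sharing class AccountSummaryController {

    // Lean wrapper: only the fields the component actually displays.
    public class AccountSummary {
        @AuraEnabled public Id recordId { get; set; }
        @AuraEnabled public String name { get; set; }
        @AuraEnabled public String phone { get; set; }
    }

    // cacheable=true lets the client cache the result and skip repeat server roundtrips.
    @AuraEnabled(cacheable=true)
    public static List<AccountSummary> getAccountSummaries(Integer maxRecords) {
        List<AccountSummary> summaries = new List<AccountSummary>();
        for (Account acc : [SELECT Id, Name, Phone FROM Account ORDER BY Name LIMIT :maxRecords]) {
            AccountSummary summary = new AccountSummary();
            summary.recordId = acc.Id;
            summary.name = acc.Name;
            summary.phone = acc.Phone;
            summaries.add(summary);
        }
        return summaries;
    }
}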

Preparing for the Future: Data Cloud and AI

The Salesforce platform is constantly evolving. Two of the biggest areas of innovation right now are Data Cloud and Salesforce Einstein AI. You might think these are separate from your day-to-day Apex work, but they're not. A solid foundation in Apex Programming is what prepares you to work with these advanced tools.

For example, you might use Apex to trigger data actions or ingest data into the Data Cloud. More importantly, the quality of your data is paramount for any AI initiative. Salesforce Einstein AI relies on clean, well-structured data to make accurate predictions and recommendations. Your Apex triggers and validation rules are the front line of defense for data quality. By writing robust code that ensures data integrity, you're directly enabling your organization to get real value from AI.

The Unwritten Rule: Write Code for Humans

Finally, let's talk about the most overlooked aspect of coding. Your code will be read by other people. It might even be read by you, a year from now, when you have no memory of writing it. Make it easy for them.

Naming Conventions Matter

Don't use cryptic variable names like `x` or `myList`. Be descriptive. A variable named `accountsToUpdate` is much clearer than `accList`. A method named `calculateShippingCosts()` tells you exactly what it does. This seems trivial, but it makes a huge difference in readability.

Commenting: The Why, Not the What

Good code should be self-documenting. I shouldn't need a comment to tell me that `if (account.NumberOfEmployees > 100)` is checking if the number of employees is greater than 100. The code itself tells me that. Where comments are invaluable is in explaining the *why*. Why does the business rule require this check? Is there a strange edge case this code is handling? A good comment gives context that the code can't.
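
For instance, a comment that earns its place explains the rule behind the check. The business rule and the custom field here are invented purely to illustrate the point.

// Why: the enterprise sales policy routes accounts with more than 100 employees
// to a dedicated team, so we flag them here before assignment logic runs.
if (account.NumberOfEmployees > 100) {
    account.Is_Enterprise__c = true; // Is_Enterprise__c: a hypothetical custom checkbox field
}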

Don't Forget Your Test Classes

Writing test classes isn't just a chore you have to do to meet the 75% code coverage requirement. Test classes are your safety net. They prove that your code works as expected, and they prevent future changes from breaking existing functionality (regression). A good test class doesn't just run the code; it uses `System.assert()` to verify that the results are correct. It tests bulk scenarios, positive outcomes, and negative outcomes. Investing time in good tests will save you hours of debugging later.
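
Here's a rough sketch of that kind of test, exercising the bulkified trigger from earlier. The custom fields `IsPrimary__c` and `BillingPhone__c` come from that example; everything else is illustrative.

@IsTest
private class AccountTriggerTest {

    @IsTest
    static void copiesPrimaryContactPhoneInBulk() {
        // Bulk scenario: 200 accounts, each with a primary contact.
        List<Account> accounts = new List<Account>();
        for (Integer i = 0; i < 200; i++) {
            accounts.add(new Account(Name = 'Test Account ' + i));
        }
        insert accounts;

        List<Contact> contacts = new List<Contact>();
        for (Account acc : accounts) {
            contacts.add(new Contact(LastName = 'Primary', AccountId = acc.Id,
                                     Phone = '555-0100', IsPrimary__c = true));
        }
        insert contacts;

        Test.startTest();
        update accounts; // Fires the before update trigger once for the whole batch.
        Test.stopTest();

        // Assert the outcome; don't just execute the code.
        for (Account acc : [SELECT BillingPhone__c FROM Account WHERE Id IN :accounts]) {
            System.assertEquals('555-0100', acc.BillingPhone__c,
                'The primary contact phone should be copied to the account.');
        }
    }
}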

Conclusion

Becoming an expert in Apex Programming is a journey, not a destination. It's about more than just learning syntax. It's about adopting a professional mindset focused on quality, scalability, and maintainability. It’s about understanding the platform's architecture and working with it, not against it. The principles we've discussed—bulkification, trigger frameworks, asynchronous processing, and writing clean, human-readable code—are the foundation of that mindset.

Don't just write code that works today. Write code that you'll be proud of a year from now. Write code that helps your team, protects your org, and builds a stable foundation for whatever comes next, whether it's a new Salesforce Integration or a complex AI model. That's the mark of a true professional.
