maintenance interview questions
Top maintenance frequently asked interview questions
So we have this huge (is 11,000 lines huge?) mainmodule.cpp source file in our project, and every time I have to touch it I cringe.
As this file is so central and large, it keeps accumulating more and more code, and I can't think of a good way to actually make it start to shrink.
The file is used and actively changed in several (>10) maintenance versions of our product, so it is really hard to refactor. If I were to "simply" split it up, say for a start into three files, then merging back changes from the maintenance versions would become a nightmare. And if you split up a file with such a long and rich history, tracking and checking old changes in the SCC history suddenly becomes a lot harder.
The file basically contains the "main class" (main internal work dispatching and coordination) of our program, so every time a feature is added it is affected, and every time it grows. :-(
What would you do in this situation? Any ideas on how to move new features to a separate source file without messing up the SCC workflow?
(Note on the tools: We use C++ with Visual Studio; we use AccuRev as SCC, but I think the type of SCC doesn't really matter here; and we use Araxis Merge to do the actual comparison and merging of files.)
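One low-risk direction (a sketch only, with all class and feature names invented here, not taken from the actual project): route each new feature through a small handler class living in its own translation unit, so the "main class" shrinks to a registry plus dispatcher. Old code paths stay where they are, which keeps merges from maintenance branches applying cleanly.

```cpp
#include <cassert>
#include <functional>
#include <memory>
#include <string>
#include <unordered_map>

// Hypothetical feature-handler interface: each new feature lives in its
// own .cpp file instead of growing mainmodule.cpp further.
struct FeatureHandler {
    virtual ~FeatureHandler() = default;
    virtual std::string handle(const std::string& request) = 0;
};

// Example feature; in practice this would sit in export_feature.cpp.
struct ExportFeature : FeatureHandler {
    std::string handle(const std::string& request) override {
        return "exported:" + request;
    }
};

// The dispatcher is all that remains central: new features only add a
// single registration line here, keeping the diff footprint tiny
// across maintenance branches.
class MainDispatcher {
public:
    void registerFeature(const std::string& name,
                         std::unique_ptr<FeatureHandler> handler) {
        handlers_[name] = std::move(handler);
    }
    std::string dispatch(const std::string& name, const std::string& request) {
        auto it = handlers_.find(name);
        return it == handlers_.end() ? "unknown" : it->second->handle(request);
    }
private:
    std::unordered_map<std::string, std::unique_ptr<FeatureHandler>> handlers_;
};
```

Existing features can be migrated out one at a time whenever they are touched anyway, so no big-bang split is needed.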
Source: (StackOverflow)
I'm trying to explain the ratio of development versus maintenance costs to our sales department, and currently I mostly have a gut feeling that we spend about 60% of our time on maintenance.
We have some people on the team who tend to sell custom solutions that we then have to build, and if the sales people don't understand the total cost of development, they will not be able to sell at realistic prices.
Another "problem" is that we are expanding our service and need to refactor some of the underlying infrastructure in order to reduce time to market and improve other metrics.
Do you have any good suggestions on what I should refer to in order to build a solid argument? And what points should I bring up in order to give them a good understanding of the problem?
Maybe there is some great text out there somewhere that I can point to.
Source: (StackOverflow)
My department is currently faced with the responsibility for the task of maintaining a rather large COBOL code base. We are wondering how to add new features to keep up with business needs. COBOL programmers are hard to come by these days, and we also think we would get higher productivity from using a more modern language like Java or C#.
We feel that we have four options:
- Rewrite everything from scratch, leaving the old application to itself until it is ready to be replaced
- Rewrite everything from scratch, getting some people to maintain the old application to cope with new business needs as the new one is being built
- Write all new functionality in a modern language and find some way to integrate the new code with the old functionality.
- Keep maintaining the old application.
What do you consider the best option for us, and why?
Source: (StackOverflow)
Recently I read a blog post saying that it is a good practice to develop Perl applications just as you would develop a CPAN module. (Here it is – thanks David!) One of the reasons given was that you could simply run cpan . in the project dir to install all the dependencies. This sounds reasonable, and I also like the “uniform interface” that you get: when you come across such an application, you know what the makefile does, etc. What are the other advantages and disadvantages of this approach?
Update: Thanks for the answers. I’ve got one more question about the dependency installing, I’ll post it separately.
Source: (StackOverflow)
How long should you keep old code commented out in your code base? The contractors keep putting old code into comments instead of deleting it. This is really frustrating, and I want them to just remove the old code instead of commenting it out.
Is there a valid reason to keep old code in the code base as comments? We use Visual SourceSafe for version control.
Source: (StackOverflow)
I am about to upgrade my whole site. I am looking for advice on how tech startups handle their maintenance process.
I am thinking of using .htaccess. However, that only redirects users accessing, for example, index.php to maintenance.php, right? If a user accesses dashboard.php, there will be no redirection.
During the maintenance I still need to access index.php (ONLY ME). Would anyone who has worked at a tech startup care to share their solution? I'd appreciate it a lot.
Source: (StackOverflow)
How do I find dead code, such as unused classes, unused variables, or unused resources, in a Visual Studio 2008 C# project?
Source: (StackOverflow)
Does anyone have a tool or recommended practice for finding a piece of code that is similar to some other code?
Often I write a function or a code fragment and remember that I have already written something like it before, and I would like to reuse the previous implementation; however, a plain text search does not reveal anything, as I did not use exactly the same variable names.
Having similar code fragments leads to unnecessary code duplication, but with a large code base it is impossible to keep all the code in memory. Are there any tools that analyze the code and mark fragments or functions that are "similar" in terms of functionality?
Consider the following examples:
float xDistance = 0, zDistance = 0;
if (camPos.X()<xgMin) xDistance = xgMin-camPos.X();
if (camPos.X()>xgMax) xDistance = camPos.X()-xgMax;
if (camPos.Z()<zgMin) zDistance = zgMin-camPos.Z();
if (camPos.Z()>zgMax) zDistance = camPos.Z()-zgMax;
float dist = sqrt(xDistance*xDistance+zDistance*zDistance);
and
float distX = 0, distZ = 0;
if (cPos.X()<xgMin) distX = xgMin-cPos.X();
if (cPos.X()>xgMax) distX = cPos.X()-xgMax;
if (cPos.Z()<zgMin) distZ = zgMin-cPos.Z();
if (cPos.Z()>zgMax) distZ = cPos.Z()-zgMax;
float dist = sqrt(distX*distX+distZ*distZ);
It seems to me this has already been asked and answered several times:
What tool to find code duplicates in C# projects?
How to detect code duplication during development?
I suggest closing as duplicate here.
Actually, I think it is a more general search problem, like: how do I search whether a question has already been asked on StackOverflow?
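Most clone detectors handle exactly this renaming problem by comparing normalized token streams rather than raw text. A minimal sketch of the idea (my own illustration, not any particular tool's algorithm): map every identifier to the same placeholder and then compare, so the two fragments above produce identical streams despite the different variable names. (As a simplification, keywords are collapsed to the placeholder too.)

```cpp
#include <cassert>
#include <cctype>
#include <string>
#include <vector>

// Normalize a code fragment: every identifier (or keyword) becomes "ID"
// and every number becomes "NUM", so fragments that differ only in
// naming yield the same token sequence.
std::vector<std::string> normalize(const std::string& code) {
    std::vector<std::string> tokens;
    size_t i = 0;
    while (i < code.size()) {
        unsigned char c = code[i];
        if (std::isspace(c)) { ++i; continue; }
        if (std::isalpha(c) || c == '_') {
            while (i < code.size() &&
                   (std::isalnum((unsigned char)code[i]) || code[i] == '_'))
                ++i;
            tokens.push_back("ID");              // identifier placeholder
        } else if (std::isdigit(c)) {
            while (i < code.size() && std::isdigit((unsigned char)code[i]))
                ++i;
            tokens.push_back("NUM");             // literal placeholder
        } else {
            tokens.push_back(std::string(1, code[i])); // operator/punct char
            ++i;
        }
    }
    return tokens;
}

// Two fragments count as "similar" here when their normalized
// token streams match exactly.
bool similar(const std::string& a, const std::string& b) {
    return normalize(a) == normalize(b);
}
```

Real tools extend this with n-gram hashing or suffix trees so near-duplicates inside a large code base can be found without comparing every pair of fragments directly.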
Source: (StackOverflow)
This may be a subjective question at risk of deletion, but I would really like some feedback.
Recently, I moved to another very large enterprise project where I work as a developer. I was aghast to find most classes in the project are anywhere from 8K to 50K lines long with methods that are 1K to 8K lines long. It's mostly business logic dealing with DB tables and data management, full of conditional statements to handle the use cases.
Are classes this large common in large enterprise systems? I realize without looking at the code it's hard to make a determination, but have you ever worked on a system with classes this large?
Source: (StackOverflow)
Say it's a 10-person project, and 2-3 of the original programmers quit after the project has been a stable release for a while. How do you keep the code maintainable in this case?
My idea is to review the code after the project reaches its release version and to keep reviewing it afterwards. Maybe split into 2-3 small groups and have each group review part of the code, so that at least 3-4 people are familiar with each part. Does this work? How do companies deal with this issue?
Usually, what percentage of time is spent on reviewing the code? Please advise; thanks to the community.
Source: (StackOverflow)
I've been working at my university this summer in an image/video lab. Just recently, my professor gave me a program written by a grad student who just left the program to "fix up", because it was "giving some errors."
The project was written in C++ (which seems to be a recurring bad sign in student code). I opened the project in VS08 and ran it, and it turns out the "errors" were a bad_alloc. Sure enough, the memory management, or more precisely the lack of it, was the problem.
The programmer seemed to like mingling mallocs, news, and new[]s throughout the entire code, with absolutely no free, delete, or delete[]. To make it worse, all the objects seem to do at least 4-5 unrelated things. And to top it off, here's a comment left by the programmer:
//do not delete objects, it seems to cause bugs in the segmenter
From what I can see, there's a nice unhealthy mix of pointers and references, and all values are changed by passing by reference to the monolithic class functions, which might as well be static. At compile time there were around 23 warnings: things like possible loss of data when converting from double to char, around 17 unused variables, etc. It's times like this that I wish C++ never existed in universities, and that all lab work was done in something like Python or MATLAB...
So now, the professor wants me to "fiddle" with the program so it can run on datasets around 10 times larger than what it was used to. I admit, I'm a bit afraid of telling her the code is garbage.
StackOverflow, you guys have never failed before with giving good advice, so now I plead, any advice on dealing with situations like this would be MUCH appreciated.
EDIT
The code is around 5000 LoC
EDIT2
The professor decided to go with the easiest approach: getting more RAM. Yay for being able to throw money at the problem...
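For what it's worth, the usual cure for the leak pattern described above is mechanical rather than clever: replace hand-matched mallocs, news, and new[]s with RAII types that free themselves. A minimal sketch with invented names (the real project's types are unknown):

```cpp
#include <cassert>
#include <memory>
#include <vector>

// Leaky style from the question:
//   double* buf = new double[n];   // ... never freed -> eventual bad_alloc

// RAII replacement for arrays: the container owns the memory and
// releases it automatically, even on early returns or exceptions.
std::vector<double> makeBuffer(size_t n) {
    return std::vector<double>(n, 0.0);   // zero-initialized, self-freeing
}

// Single-object case: unique_ptr instead of a bare new. (Segmenter is a
// hypothetical stand-in for one of the project's classes.)
struct Segmenter { int threshold = 0; };

std::unique_ptr<Segmenter> makeSegmenter(int threshold) {
    auto s = std::make_unique<Segmenter>();
    s->threshold = threshold;
    return s;                             // ownership transfers to the caller
}
```

Converting allocation sites one at a time like this also tends to flush out why "deleting objects caused bugs in the segmenter": usually a double delete or a use-after-free hiding elsewhere.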
Source: (StackOverflow)
It's a trivial task to find out if an object is referenced by something else or not. What I'd like to do is identify whether or not it's actually being used.
My solution originally involved a combination of a table that held a list of objects in the database and an hourly job.
The job did two things. First, it looked for new objects that had been added to the database since the last run. Second, it looked at SQL Server's object cache. If an object in the table was listed in the cache, it was marked off in the table as having recently been "seen" in use.
At the end of a six-month period or so, the contents of the table were examined. Anything listed in the table that hadn't been seen in use since I started monitoring was probably safe to back up and remove.
Sure, there is the possibility of objects that are only used, say, once a year or whatever, but it seemed to work for the most part.
It was kind of a pain to work with, though.
There are about a half dozen databases I'm working with, the majority of which have tons of legacy tables on them, which remain long after their original creators moved on to other companies.
What I'm looking for is a fairly reliable method of keeping track of when an object (table, view, stored procedure, or function) is getting called.
For those of you who currently monitor this sort of thing, what method/code do you use and would you recommend it?
Source: (StackOverflow)
I need to copy some records from our SQLServer 2005 test server to our live server. It's a flat lookup table, so no foreign keys or other referential integrity to worry about.
I could key in the records again on the live server, but this is tiresome. I could export the test server's records and table data in their entirety into an SQL script and run that, but I don't want to overwrite the records present on the live system, only add to them.
How can I select just the records I want and get them transferred into the live server? We don't have SharePoint, which I understand would allow me to copy them directly between the two instances.
Source: (StackOverflow)
I'm interested in maintaining a Maven 2 repository for my organization. What are some of the pointers and pitfalls that would help?
What are guidelines for users to follow when setting up standards for downloading from or publishing their own artifacts to the repository when releasing their code? What kinds of governance/rules do you have in place for this type of thing? What do you include about it in your developer's guide/documentation?
UPDATE: We've stood up Nexus and have been very happy with it; we followed most of Sal's guidelines and haven't had any trouble. In addition, we've restricted deploy access and automated build/deployment of snapshot artifacts through a Hudson CI server. Hudson can analyze all of the upstream/downstream project dependencies, so if a compilation problem, test failure, or some other violation causes the build to break, no deployment will occur.

Be wary of doing snapshot deployments in Maven2/Maven3, as the metadata has changed between the two versions. The "Hudson only" snapshot deployment strategy will mitigate this. We do not use the Release Plugin, but have written some plumbing around the Versions plugin for moving a snapshot to release.

We also use m2eclipse, and it seems to work very well with Nexus: from the settings file it can see Nexus and knows to index artifact information for lookup from there. (Though I have had to tweak some of those settings to have it fully index our internal snapshots.) I'd also recommend you deploy a source jar with your artifacts as a standard practice if you're interested in doing this. We configure that in a super POM.
UPDATE2: I've come across this Sonatype whitepaper which details different stages of adoption/maturity, each with different usage goals for a Maven Repository manager.
Source: (StackOverflow)
I was reading about refactoring a large slow SQL Query over here, and the current highest response is from Mitch Wheat, who wants to make sure the query uses indexes for the major selects, and mentions:
First thing I would do is check to make sure there is an active index maintenance job being run periodically. If not, get all existing indexes rebuilt or, if that's not possible, at least get statistics updated.
I'm only an amateur DBA, and I've built a few freelance programs that are basically Java desktop clients, occasionally with a MySQL backend. When I set up the system, I knew to create an index on the columns that would be queried by: there's a varchar CaseID and a varchar CustName.
However, I set this system up months ago and left the client operating it. I believe the indexes grow as data is entered, and I believe everything is still working nicely. I'm worried, though, that the indexes should be rebuilt periodically, because today I read that there should be an 'active maintenance job'. The only maintenance job I set up on the thing was a nightly backup.
I wanted to ask the community about the regular maintenance a database might require. Is it necessary to rebuild indexes? Can I trust the MySQL backend to keep going as long as no one messes with it and the data stays under a few gigabytes?
Source: (StackOverflow)