Binary interview questions
Top frequently asked interview questions about binary
Similar to how you can define an integer constant in hexadecimal or octal, can you define one in binary?
I admit this is a really easy (and stupid) question. My Google searches are coming up empty.
Source: (StackOverflow)
8 bits representing the number 7 look like this:
00000111
Three bits are set.
What algorithms are there to determine the number of set bits in a 32-bit integer?
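As an aside, one classic approach is Kernighan's trick of repeatedly clearing the lowest set bit; here is a minimal Python sketch of it (an illustration, not necessarily the fastest method):
def count_set_bits(n):
    # Count set bits by repeatedly clearing the lowest set bit.
    n &= 0xFFFFFFFF          # treat n as a 32-bit value
    count = 0
    while n:
        n &= n - 1           # clears the lowest set bit
        count += 1
    return count

print(count_set_bits(7))     # -> 3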
Source: (StackOverflow)
I never clearly understood what an ABI is. I'm sorry for such a lengthy question; I just want to clearly understand things. Please don't point me to the Wikipedia article; if I could understand it, I wouldn't be here posting such a lengthy post.
This is how I think about different interfaces:
A TV remote is an interface between the user and the TV. It is an existing entity, but it is useless (provides no functionality) by itself. All the functionality for each of those buttons on the remote is implemented in the television set.
Interface: an "existing entity" layer between the functionality and the consumer of that functionality. An interface by itself doesn't do anything; it just invokes the functionality lying behind it.
Now, depending on who the user is, there are different types of interfaces.
Command Line Interface (CLI): commands are the existing entities, the consumer is the user, and the functionality lies behind.
- functionality: my software's functionality, which serves the purpose for which we are describing this interface.
- existing entities: commands
- consumer: the user
Graphical User Interface (GUI): windows, buttons, etc. are the existing entities; again the consumer is the user, and the functionality lies behind.
- functionality: my software's functionality, which serves the purpose for which we are describing this interface.
- existing entities: windows, buttons, etc.
- consumer: the user
Application Programming Interface (API): functions, or to be more correct, interfaces (as in interface-based programming) are the existing entities; the consumer here is another program, not a user, and again the functionality lies behind this layer.
- functionality: my software's functionality, which serves the purpose for which we are describing this interface.
- existing entities: functions, interfaces (arrays of functions).
- consumer: another program/application.
Application Binary Interface (ABI): here is where my problem starts.
- functionality: ???
- existing entities: ???
- consumer: ???
I've written a few pieces of software in different languages and provided different kinds of interfaces (CLI, GUI, API), but I'm not sure I ever provided an ABI.
Wikipedia says:
ABIs cover details such as
- data type, size, and alignment;
- the calling convention, which controls how functions' arguments are passed and return values retrieved;
- the system call numbers and how an application should make system calls to the operating system.
Other ABIs standardize details such as
- the C++ name mangling,[2]
- exception propagation,[3] and
- the calling convention between compilers on the same platform, but do not require cross-platform compatibility.
Who needs these details? Please don't say "the OS". I know assembly programming. I know how linking and loading work. I know exactly what happens inside.
Where does C++ name mangling come in? I thought we were talking at the binary level. Where do languages come in?
Anyway, I've downloaded the [PDF] System V Application Binary Interface, Edition 4.1 (1997-03-18) to see what exactly it contains. Well, most of it didn't make any sense to me.
Why does it contain two chapters (the 4th and 5th) which describe the ELF file format? In fact, these are the only two significant chapters in that specification; all the rest are "processor specific". Anyway, I thought that was a completely different topic. Please don't say that the ELF file format spec is the ABI; it doesn't qualify as an interface according to the definition.
I know that, since we are talking at such a low level, it must be very specific. But I'm not sure how it is "Instruction Set Architecture (ISA)" specific.
Where can I find MS Windows' ABI?
So, these are the major queries that are bugging me.
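As a side note on the "data type, size, and alignment" bullet quoted above: one way to see that the platform ABI dictates such things is Python's standard struct module, which can lay out the same fields with and without the native alignment rules. The field layout and numbers below are only an illustrative assumption for a typical x86-64 platform:
import struct

# '@' uses the platform's native sizes and alignment (whatever the platform ABI dictates);
# '=' uses standard sizes with no padding.
native = struct.calcsize('@bi')   # a char followed by an int, natively aligned
packed = struct.calcsize('=bi')   # the same two fields with no alignment padding

print(native, packed)             # e.g. 8 5 on a typical x86-64 Linux build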
Source: (StackOverflow)
How do you express an integer as a binary number with Python literals?
I was easily able to find the answer for hex:
>>> 0x12AF
4783
>>> 0x100
256
and, octal:
>>> 01267
695
>>> 0100
64
How do you use literals to express binary in Python?
Summary of Answers
- Python 2.5 and earlier: can express binary using int('01010101111', 2) but not with a literal.
- Python 2.5 and earlier: there is no way to express binary literals.
- Python 2.6 beta: you can write it like so: 0b1100111 or 0B1100111.
- Python 2.6 beta: will also allow 0o27 or 0O27 (the second character is the letter O) to represent an octal.
- Python 3.0 beta: same as 2.6, but will no longer allow the older 027 syntax for octals.
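In interpreter terms, and mirroring the hex and octal sessions above, the binary literal looks like this (assuming Python 2.6+ or 3.x; bin() also appeared in 2.6):
>>> 0b1100111
103
>>> bin(103)
'0b1100111'
>>> int('1100111', 2)  # works on older versions too
103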
Source: (StackOverflow)
I'm in a computer systems course and have been struggling, in part, with Two's Complement. I want to understand it, but everything I've read hasn't brought the picture together for me. I've read the Wikipedia article and various other articles, including my textbook.
Hence, I wanted to start this community wiki post to define what Two's Complement is, how to use it and how it can affect numbers during operations like casts (from signed to unsigned and vice versa), bit-wise operations and bit-shift operations.
What I'm hoping for is a clear and concise definition that is easily understood by a programmer who does not hold a PhD (or even a B.S.) in Computer Science. (I have more of a software engineering B.S. and am pursuing an M.S. in Software Engineering.)
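As a tiny concrete anchor for the definitions being asked about: in two's complement, the negative of a number is obtained by inverting the bits and adding one. A minimal Python sketch (the 8-bit width is an assumption chosen just for illustration):
def twos_complement(n, bits=8):
    # Two's complement of n in `bits` bits: invert, add one, then mask to the width.
    return (~n + 1) & ((1 << bits) - 1)

print(format(twos_complement(5), '08b'))   # -> 11111011, i.e. -5 in 8 bits
print(twos_complement(5) + 5 == 2 ** 8)    # -> True: -5 + 5 wraps around to 0 mod 2**8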
Source: (StackOverflow)
I need to work with a binary number.
I tried writing:
const x = 00010000;
But it didn't work.
I know that I can use a hexadecimal number that has the same value as 00010000, but I want to know whether there is a type in C++ for binary numbers, and if there isn't, is there another solution for my problem?
Source: (StackOverflow)
I have a byte array with a known binary sequence in it. I need to confirm that the binary sequence is what it's supposed to be. I have tried '.equals' in addition to '==', but neither worked.
byte[] array = new BigInteger("1111000011110001", 2).toByteArray();
if (new BigInteger("1111000011110001", 2).toByteArray() == array) {
    System.out.println("the same");
} else {
    System.out.println("different");
}
Source: (StackOverflow)
What would be the best way (ideally, simplest) to convert an int to a binary string representation in Java?
For example, say the int is 156. The binary string representation of this would be "10011100".
Source: (StackOverflow)
Wikipedia says
Base64 encoding schemes are commonly used when there is a need to encode binary data that needs to be stored and transferred over media that are designed to deal with textual data. This is to ensure that the data remains intact without modification during transport.
But isn't it the case that data is always stored/transmitted in binary, because the memory our machines have stores binary, and it just depends on how you interpret it? So, whether you encode the bit pattern 010011010110000101101110 as Man in ASCII or as TWFu in Base64, you are eventually going to store the same bit pattern.
If the ultimate encoding is in terms of zeros and ones and every machine and media can deal with them, how does it matter if the data is represented as ASCII or Base64?
What does "media that are designed to deal with textual data" mean? They can deal with binary, so they can deal with anything.
Thanks everyone, I think I understand now.
When we send data, we cannot be sure that the data will be interpreted in the same format as we intended. So, we send the data encoded in some format (like Base64) that both parties understand. That way, even if the sender and receiver interpret the same things differently, because they agree on the encoded format, the data will not get interpreted wrongly.
From Mark Byers' example:
If I want to send
Hello
world!
One way is to send it in ASCII like
72 101 108 108 111 10 119 111 114 108 100 33
But byte 10 might not be interpreted correctly as a newline at the other end. So, we use a subset of ASCII to encode it like this
83 71 86 115 98 71 56 115 67 110 100 118 99 109 120 107 73 61 61
which, at the cost of more data transferred for the same amount of information, ensures that the receiver can decode the data in the intended way, even if the receiver happens to have different interpretations for the rest of the character set.
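The same round trip can be reproduced with Python's standard base64 module; this small sketch shows the idea (the encoded string is simply whatever b64encode produces, not a claim about the exact byte list quoted above):
import base64

message = b"Hello\nworld!"

print(list(message))                         # the raw bytes: [72, 101, 108, 108, 111, 10, ...]
encoded = base64.b64encode(message)          # uses only A-Z, a-z, 0-9, '+', '/' and '='
print(encoded)
print(base64.b64decode(encoded) == message)  # -> True: nothing was lost in the round trip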
Source: (StackOverflow)
Consider this code:
x = 1 # 0001
x << 2 # Shift left 2 bits: 0100
# Result: 4
x | 2 # Bitwise OR: 0011
# Result: 3
x & 1 # Bitwise AND: 0001
# Result: 1
I can understand the arithmetic operators in Python (and other languages), but I have never understood 'bitwise' operators very well. In the above example (from a Python book), I understand the left shift but not the other two.
Also, what are bitwise operators actually used for? I'd appreciate some examples.
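For the "what are they used for" part, one classic use is packing several on/off options (bit flags) into a single integer; here is a small hypothetical permissions sketch:
# One bit per permission.
READ, WRITE, EXECUTE = 0b100, 0b010, 0b001

perms = READ | WRITE              # turn two flags on            -> 0b110
has_write = bool(perms & WRITE)   # test whether a flag is set   -> True
perms &= ~EXECUTE                 # force a flag off (a no-op here, it was already off)
perms ^= WRITE                    # toggle a flag                -> 0b100

print(format(perms, '03b'), has_write)   # -> 100 True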
Source: (StackOverflow)
For one and a half years, I have been keeping my eyes on the git community in hopes of making the switch away from SVN. One particular issue holding me back is the inability to lock binary files. Throughout the past year I have yet to see developments on this issue. I understand that locking files goes against the fundamental principles of distributed source control, but I don't see how a web development company can take advantage of git to track source code and image file changes when there is the potential for binary file conflicts.
To achieve the effects of locking, a "central" repository must be identified. Regardless of the distributed nature of git, most companies will have a "central" repository for a software project. We should be able to mark a file as requiring a lock from the governing git repository at a specified address. Perhaps this is made difficult because git tracks file contents, not files?
Do any of you have experience in dealing with git and binary files that should be locked before modification?
NOTE: It looks like Source Gear's new open source distributed version control project, Veracity, has locking as one of its goals.
Source: (StackOverflow)
I'm following a college course about operating systems, and we're learning how to convert from binary to hexadecimal, decimal to hexadecimal, etc., and today we just learned how signed/unsigned numbers are stored in memory using two's complement (~number + 1).
We have a couple of exercises to do on paper, and I would like to be able to verify my answers before submitting my work to the teacher. I wrote a C++ program for the first few exercises, but now I'm stuck as to how I could verify my answer with the following problem:
char a, b;
short c;
a = -58;
c = -315;
b = a >> 3;
and we need to show the binary representation in memory of a, b and c.
I've done it on paper and it gives me the following results (all the binary representations in memory of the numbers after the two's complement):
a = 00111010 (it's a char, so 1 byte)
b = 00001000 (it's a char, so 1 byte)
c = 11111110 11000101 (it's a short, so 2 bytes)
Is there a way to verify my answer? Is there a standard way in C++ to show the binary representation in memory of a number, or do I have to code each step myself (calculate the two's complement and then convert to binary)? I know the latter wouldn't take long, but I'm curious as to whether there is a standard way to do so.
Thank you for your help (I couldn't find a question with a similar topic using the keywords I know, so I'm sorry if this is some sort of duplicate).
Also, I didn't really know which tags to pick, so feel free to change them accordingly.
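For cross-checking this kind of paper exercise, a small Python sketch can print two's-complement bit patterns under explicit assumptions (an 8-bit char, a 16-bit short, and an arithmetic right shift, which is the typical but implementation-defined behaviour of >> on a negative value in C++):
def bits(value, width):
    # Two's-complement bit pattern of `value` in `width` bits.
    return format(value & ((1 << width) - 1), '0{}b'.format(width))

a = -58
b = a >> 3          # Python's >> on a negative int is an arithmetic shift
c = -315

print(bits(a, 8))   # pattern of a
print(bits(b, 8))   # pattern of b
print(bits(c, 16))  # pattern of c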
Source: (StackOverflow)
Why should I use a human-readable file format in preference to a binary one? Is there ever a situation when this isn't the case?
EDIT:
I did have this as an explanation when initially posting the question, but it's not so relevant now:
When answering this question I wanted to refer the asker to a standard SO answer on why using a human-readable file format is a good idea. Then I searched for one and couldn't find one. So here's the question.
Source: (StackOverflow)