Core Principles of Computation: Complexity, Automata, and Algorithms

1.) What is computational complexity theory, and why is it important? It studies how efficiently problems can be solved by algorithms, measured in resources such as time and memory.
2.) Explain the difference between time complexity and space complexity. Time complexity measures how an algorithm's runtime grows with input size, while space complexity measures how much memory it uses as input size grows.
3.) What are the P and NP classes in complexity theory? P contains problems that can be solved quickly (in polynomial time), while NP contains problems whose solutions can be verified quickly.
4.) What does it mean when a problem is NP-complete? It is one of the hardest problems in NP; solving any NP-complete problem quickly would mean all NP problems can be solved quickly.
5.) Define decidability in computability theory. A problem is decidable if there is an algorithm that solves it correctly for all possible inputs.
6.) What is the Halting Problem, and why is it significant? It asks whether a given program halts on a given input or runs forever, and it is significant because it is undecidable.
7.) How does reducibility relate to problem-solving in computation? One problem can be transformed into another, which lets us compare their relative difficulty.
8.) What is the significance of Turing reducibility? It shows that Problem A can be solved using a solution (an oracle) for Problem B.
9.) How does one problem being Turing reducible to another help in understanding their relative complexities? It means Problem A is no harder than Problem B.
10.) Explain the concept of "undecidable problems" with an example. These are problems no algorithm can solve for all inputs; the Halting Problem is the classic example.
11.) What is a finite automaton? A simple machine that accepts or rejects strings according to a fixed set of state-transition rules.
12.) Differentiate between deterministic and nondeterministic finite automata. A DFA follows exactly one transition for each state and input symbol, while an NFA may have several possible transitions or none.
13.) What are the components of a finite automaton? States, an input alphabet, a transition function, a start state, and a set of accept states.
14.) How are regular languages recognized using finite automata? A language is regular exactly when some finite automaton recognizes (accepts) it.
15.) What is the role of a transition table in finite automata? It shows how the automaton moves from one state to another on each input symbol.
16.) What are the limitations of finite automata? They cannot recognize languages that require unbounded memory, such as matching nested parentheses.
17.) What is a regular expression, and how is it related to finite automata? A regular expression defines a regular language, which can equivalently be recognized by a finite automaton.
18.) Explain the concept of ε-transitions in NFAs. These are transitions that let the automaton change state without consuming any input symbol.
19.) What is the pumping lemma, and how is it used to prove a language is not regular? It states that every sufficiently long string in a regular language has a middle piece that can be "pumped" (repeated any number of times); if a language contains strings that cannot be pumped this way, it is not regular.
20.) How does a DFA accept or reject a string? A DFA accepts a string if it ends in an accept state after reading all the input; otherwise it rejects it (a minimal DFA simulation is sketched after this list).
21.) Define context-free grammar and provide its components. A CFG is a set of rules that generate the strings of a language; it consists of variables, terminals, a start variable, and production rules.
22.) How are context-free languages recognized by pushdown automata? A pushdown automaton uses a stack, which lets it keep track of nested structures.
23.) What is the difference between regular and context-free languages? Regular languages need only a finite number of states, while context-free languages can handle nested structures using a stack.
24.) What are leftmost and rightmost derivations in CFGs? A leftmost derivation always replaces the leftmost variable first; a rightmost derivation replaces the rightmost.
25.) Explain ambiguity in context-free grammars with an example. A CFG is ambiguous if some string has more than one parse tree; for instance, a grammar with the rule E → E + E gives two different parse trees for "a + b + c".
26.) What is Chomsky Normal Form, and why is it useful? It is a way of writing CFGs in which every rule has a strict format (A → BC or A → a), which makes parsing algorithms easier to apply.
27.) How does the CYK algorithm work in parsing? It uses dynamic programming to check whether a string can be generated by a CFG in Chomsky Normal Form, building up the set of variables that derive each substring (a CYK sketch appears after this list).
28.) What are applications of context-free languages in computer science? They are used in programming languages, compilers, and interpreters to define syntax.
29.) What is the significance of parse trees in CFGs? They show how a string is derived using the grammar and help visualize its structure.
30.) How can ambiguity in grammars affect programming languages? It can lead to confusion or incorrect interpretation of code by the compiler.
31.) State Kleene's Second Recursion Theorem. For any computable transformation of programs there is a program whose behavior is unchanged by the transformation; informally, every program can be written so that it can access and use its own description.
32.) What is the intuition behind Kleene's Second Recursion Theorem? It shows that programs can refer to and manipulate themselves, as in self-replicating or self-modifying code.
33.) How does the theorem relate to self-replicating programs? It guarantees that such programs exist by proving a program can access its own code.
34.) What are the implications of the recursion theorem in programming languages? It proves that self-referential behavior is computable, and that behavior shows up in practice in compilers and malware.
35.) How does the theorem demonstrate limits of computability? It shows that some self-referential behaviors are possible, yet not every problem can be decided or computed, even with that power.
36.) What is a maximal margin classifier? A model that separates classes with the widest possible gap, or margin.
37.) How does a maximal margin classifier separate data points? It draws a boundary (hyperplane) that maximizes the distance to the closest data points of each class.
38.) Why is maximizing the margin important in classification? It improves generalization and reduces overfitting on new data.
39.) What are support vectors in the context of maximal margin classifiers? The data points that lie closest to the decision boundary; they alone define the margin.
40.) How does the maximal margin classifier relate to support vector machines (SVMs)? SVMs build on the maximal margin idea, classifying data using the support vectors (a short linear-SVM sketch appears after this list).
41.) What is the concept of bootstrapping in learning? A resampling method that builds new datasets by sampling with replacement from the original data (a bootstrap sketch appears after this list).
42.) How does bootstrapping improve model performance? It reduces variance and improves stability by averaging over multiple models.
43.) What is the difference between bootstrapping and bagging? Bootstrapping is the resampling technique itself; bagging uses bootstrapping to create different training sets and then combines the resulting models, usually by voting or averaging.
44.) How can bootstrapping be used in analyzing computational models? It helps estimate accuracy, confidence intervals, and model robustness without collecting more data.
45.) What are some limitations of bootstrapping in practice? It can be computationally expensive, and it may work poorly on small or non-representative datasets.
46.) How does theoretical computer science contribute to real-world applications? It provides the foundation for the algorithms, security, programming languages, and system design used in real software and hardware.
47.) Why is understanding formal languages important in software development? It helps in writing compilers and interpreters and in ensuring that programs follow correct syntax and structure.
48.) How does automata theory help in compiler design? It underlies the lexical analyzers and parsers that process source code.
49.) What role does recursion play in algorithm design? Recursion solves complex problems by breaking them into smaller, self-similar subproblems (a recursion sketch appears after this list).
50.) How do concepts from this course apply to programming or software testing? They help in building efficient code, verifying correctness, analyzing performance, and understanding the limits of what can be computed.
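
To make item 20 concrete, here is a minimal Python sketch of how a DFA accepts or rejects a string. The particular machine (one accepting binary strings with an even number of 0s) is a made-up example for illustration, not one from the notes:

    # Minimal DFA simulation: the machine accepts binary strings
    # containing an even number of 0s (an assumed toy example).
    dfa = {
        "start": "even",                # start state
        "accept": {"even"},             # accept states
        "delta": {                      # transition function: (state, symbol) -> state
            ("even", "0"): "odd",
            ("even", "1"): "even",
            ("odd", "0"): "even",
            ("odd", "1"): "odd",
        },
    }

    def dfa_accepts(dfa, string):
        state = dfa["start"]
        for symbol in string:
            state = dfa["delta"][(state, symbol)]  # exactly one move per symbol
        return state in dfa["accept"]              # accept iff we end in an accept state

    print(dfa_accepts(dfa, "1001"))  # True: two 0s
    print(dfa_accepts(dfa, "10"))    # False: one 0

Note how determinism shows up directly in the code: each (state, symbol) pair maps to exactly one next state, so the run never branches.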
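Item 27's CYK algorithm can likewise be sketched briefly. This is a simplified illustration, not a production parser; the toy CNF grammar, which generates { a^n b^n : n >= 1 }, is an assumption chosen for the example:

    # CYK parsing for a grammar in Chomsky Normal Form.
    # Assumed toy grammar: S -> A B | A C,  C -> S B,  A -> a,  B -> b
    rules = [
        ("S", ("A", "B")),
        ("S", ("A", "C")),
        ("C", ("S", "B")),
        ("A", ("a",)),
        ("B", ("b",)),
    ]

    def cyk(string, rules, start="S"):
        n = len(string)
        if n == 0:
            return False
        # table[i][j] = variables deriving the substring of length j+1 starting at i
        table = [[set() for _ in range(n)] for _ in range(n)]
        for i, ch in enumerate(string):          # length-1 substrings (terminal rules)
            for head, body in rules:
                if body == (ch,):
                    table[i][0].add(head)
        for length in range(2, n + 1):           # build up longer substrings
            for i in range(n - length + 1):
                for split in range(1, length):   # try every split point
                    for head, body in rules:
                        if (len(body) == 2
                                and body[0] in table[i][split - 1]
                                and body[1] in table[i + split][length - split - 1]):
                            table[i][length - 1].add(head)
        return start in table[0][n - 1]

    print(cyk("aabb", rules))  # True
    print(cyk("abab", rules))  # False

The triple loop over lengths, start positions, and split points is the dynamic programming described in item 27, giving O(n^3) behavior in the string length.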
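For items 36-40, a linear SVM is the standard way to compute a maximal margin classifier in practice. A minimal sketch using scikit-learn, assuming that library is installed; the toy points are made up, and a large C is used so the soft-margin SVM approximates the hard maximal margin classifier:

    from sklearn.svm import SVC

    # Two tiny, linearly separable classes (made-up points).
    X = [[1, 1], [2, 1], [1, 2],     # class 0
         [4, 4], [5, 4], [4, 5]]     # class 1
    y = [0, 0, 0, 1, 1, 1]

    # kernel="linear" fits a separating hyperplane; a large C
    # approximates the hard maximal margin classifier.
    clf = SVC(kernel="linear", C=1e6).fit(X, y)

    print(clf.support_vectors_)            # the points that define the margin
    print(clf.predict([[2, 2], [5, 5]]))   # expected: class 0, then class 1

As item 39 says, only the support vectors printed above determine the boundary; deleting any other training point would leave the fitted hyperplane unchanged.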
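Items 41-44 describe the bootstrap. Here is a minimal sketch, assuming the goal is a confidence interval for the mean of a small made-up sample (pure standard library, no external packages):

    import random

    random.seed(0)  # fixed seed so the example is reproducible

    data = [2.1, 3.5, 2.9, 4.0, 3.2, 2.7, 3.8, 3.1]  # made-up sample

    def bootstrap_means(data, n_resamples=10_000):
        means = []
        for _ in range(n_resamples):
            # resample with replacement, same size as the original data
            resample = random.choices(data, k=len(data))
            means.append(sum(resample) / len(resample))
        return sorted(means)

    means = bootstrap_means(data)
    lo = means[int(0.025 * len(means))]   # 2.5th percentile
    hi = means[int(0.975 * len(means))]   # 97.5th percentile
    print(f"mean ~ {sum(data)/len(data):.2f}, 95% bootstrap CI ~ ({lo:.2f}, {hi:.2f})")

The same resampling loop is what bagging (item 43) uses internally: each resample trains one model, and the ensemble averages or votes over them.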
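Finally, for item 49, merge sort is a standard example (our choice, not one named in the notes) of solving a problem by breaking it into smaller, self-similar subproblems:

    def merge_sort(items):
        # Base case: a list of 0 or 1 elements is already sorted.
        if len(items) <= 1:
            return items
        mid = len(items) // 2
        left = merge_sort(items[:mid])    # recursively sort each half...
        right = merge_sort(items[mid:])
        merged = []                       # ...then merge the two sorted halves
        i = j = 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i]); i += 1
            else:
                merged.append(right[j]); j += 1
        return merged + left[i:] + right[j:]

    print(merge_sort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]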
