Linux: Signals (3)
Published: 2023-02-01


Linux Signals: Core Dump and Signal Handling

In this article, we'll explore the core dump, a valuable tool for debugging and understanding program crashes. We'll also delve into signal handling, including signal delivery, pending signals, and signal blocking.

Core Dump

A core dump is a term from computer science: when a program terminates abnormally due to an error, an exception, or a specific signal, the operating system captures the program's memory state at the moment of termination and saves it to a file known as a core file. This core file is a precious resource for developers, as it records the state needed to understand why the program failed, which is especially valuable when the error is difficult to reproduce.

The core file typically includes:

  • Memory contents: A snapshot of the program's memory at the moment of the crash.
  • Register state: Details like the program counter, stack pointer, and other registers, which are crucial for reconstructing the execution flow.
  • Memory management information: Insights into how the program allocated and managed its memory.
  • System and processor state: Additional operational data related to the processor and OS.

These details are invaluable for debugging. Using a tool like GDB (the GNU Debugger), developers can load the core file together with the corresponding executable (for example, gdb ./app core) to analyze the crash state, identify the root cause, and fix the issue.

Enabling Core Dump

By default, core dumps are usually disabled: the core file size limit is 0, which can be checked with the ulimit -a command. To have core files generated, set the size limit temporarily with ulimit -c <size> (or ulimit -c unlimited) in the current shell session. This setting is reset upon relogin.
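
The same limit can also be raised from inside a program through the POSIX resource-limit API. Here is a minimal sketch (assuming a typical Linux setup whose core_pattern allows a core file to be written) that lifts the soft RLIMIT_CORE limit and then crashes on purpose:

    #include <sys/resource.h>
    #include <cstdio>

    int main() {
        struct rlimit rl;
        if (getrlimit(RLIMIT_CORE, &rl) != 0) {   // read the current core-file size limits
            perror("getrlimit");
            return 1;
        }
        rl.rlim_cur = rl.rlim_max;                // raise the soft limit; only root may raise the hard limit
        if (setrlimit(RLIMIT_CORE, &rl) != 0) {
            perror("setrlimit");
            return 1;
        }
        volatile int *p = nullptr;
        *p = 42;                                  // deliberate SIGSEGV: the kernel should now dump core
        return 0;
    }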

Why Core Dumps Are Disabled by Default

The reasons for core dump being disabled by default are as follows:

  • Security concerns: Core files may contain sensitive information, such as passwords, keys, or other critical data. This poses a security risk if the file is accessed by unintended parties. Disabling core dump helps mitigate this risk.

  • Disk space consumption: Core files can be large, especially for programs that use significant amounts of memory. Without proper limits, they can quickly consume disk space, affecting system performance and other processes.

  • Performance impact: Frequent core file generation can lead to performance degradation, particularly in high-load or resource-constrained environments, as it requires additional I/O operations to write the large files.

  • Conservative system policies: System administrators often prefer a cautious approach. Core files are typically enabled only when necessary, such as in development or debugging environments, to avoid unnecessary complications.

  • User awareness: Not all users are tech-savvy, and core files may be seen as unnecessary and space-occupying files by non-technical users. Disabling them by default avoids potential confusion.

Despite these reasons, core files are a powerful debugging tool. Developers and system administrators often configure core files to be generated in development and test environments, setting appropriate limits and storage locations to balance debugging convenience with system management needs.

Handling Signals

What is signal handling?

Signal handling is a fundamental mechanism of Unix-like systems such as Linux. Signals act like software interrupts that notify a process of specific events, such as hardware errors, software exceptions, or user actions (e.g., pressing Ctrl+C).

Key Concepts

  • Signal delivery: The process of transferring a signal from the OS to the target process.
  • Pending signals: Signals that have been generated but are still awaiting delivery, for example because they are blocked.
  • Signal blocking: A mechanism by which a process can temporarily prevent certain signals from being delivered immediately.

Signal Delivery

Signal delivery consists of several key steps:

  • Signal generation: Signals can be produced by hardware faults (e.g., division by zero), software requests (e.g., the kill command), user input (e.g., Ctrl+C), or other processes.

  • Signal masking: Before attempting to deliver a signal, the OS checks the process's signal mask. If the signal is blocked, it remains in a pending state until the process unblocks it.

  • Scheduling delivery: If the signal is not blocked, the OS delivers it at an appropriate time, typically when the process returns from kernel mode to user mode.

  • Signal handling: Upon delivery, the process either takes the default action (e.g., terminating) or runs a custom handler registered with signal() or sigaction(); a minimal sigaction() sketch follows this list.
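
For illustration, here is a minimal sketch of the custom-handler path: it installs a SIGINT handler with sigaction() and then waits for signals (the handler name and message are just placeholders).

    #include <signal.h>
    #include <unistd.h>
    #include <cstdio>

    // Keep the handler async-signal-safe: write() instead of printf()/std::cout.
    void handle_sigint(int signum) {
        (void)signum;                      // unused
        const char msg[] = "caught SIGINT\n";
        write(STDOUT_FILENO, msg, sizeof(msg) - 1);
    }

    int main() {
        struct sigaction sa{};
        sa.sa_handler = handle_sigint;
        sigemptyset(&sa.sa_mask);          // block no extra signals while the handler runs
        sa.sa_flags = SA_RESTART;          // restart interrupted slow system calls
        if (sigaction(SIGINT, &sa, nullptr) != 0) {
            perror("sigaction");
            return 1;
        }
        while (true) {
            pause();                       // sleep until a signal arrives
        }
    }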

Pending Signals

Pending signals describe a scenario where one or more signals have been generated but are yet to be processed. This can occur due to:

  • Signal blocking: A process may block certain signals, causing them to remain pending until the process unblocks them.
  • Signal handling: If a process is handling another signal, new signals may remain pending until the current signal is processed.
  • Specific timing: Signals may remain pending during certain system calls that cannot be interrupted, ensuring system stability and data consistency.

Signal Blocking

Signal blocking allows a process to prevent immediate delivery of specific signals. This is achieved by setting a signal mask, which lists the signals that are blocked. Blocked signals remain pending until the process explicitly unblocks them.
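
A common pattern is to block a signal around a critical section and then restore the previous mask. The sketch below is a minimal, self-contained illustration of this with sigprocmask(), deferring SIGINT for five seconds:

    #include <signal.h>
    #include <unistd.h>
    #include <iostream>

    int main() {
        sigset_t block_set, old_set;
        sigemptyset(&block_set);
        sigaddset(&block_set, SIGINT);                 // block SIGINT (signal 2)

        // Block SIGINT and remember the previous mask.
        sigprocmask(SIG_BLOCK, &block_set, &old_set);

        std::cout << "Critical section: Ctrl+C is deferred for 5 seconds" << std::endl;
        sleep(5);                                      // any SIGINT sent now stays pending

        // Restore the old mask; a pending SIGINT is delivered here
        // (with the default action, that terminates the process).
        sigprocmask(SIG_SETMASK, &old_set, nullptr);

        std::cout << "Mask restored" << std::endl;
        return 0;
    }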

Understanding the Differences

Understanding the difference between signal blocking and pending signals is crucial:

  • Blocking is active: It actively prevents signal delivery, keeping the signal in a pending state.
  • Pending is a state: Signals may be pending even if not explicitly blocked, depending on other factors like signal handling or system timing.

Blocking and pending signals are complementary features that help manage how and when signals affect a process, ensuring both flexibility and stability in the system.

Signal-Related Functions

In Linux system programming, the following functions are essential for managing and responding to received signals:

  • sigemptyset(), sigfillset(), sigaddset(), sigdelset(), sigismember(): These functions manipulate a signal set (sigset_t): clear or fill it, add or remove individual signals, and test membership.

  • sigprocmask(): This function modifies the current process's signal mask, allowing you to block or unblock specific signals.

  • sigpending(): Queries the set of pending signals for the current process.

  • sigaction(): A more advanced function for defining custom signal handling behaviors.

These functions provide fine-grained control over signal handling, making it easier to manage asynchronous events and exceptions in your applications.
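
As a quick illustration of the signal-set helpers on their own (a minimal sketch, separate from the fuller examples below), the following snippet builds a set and tests membership:

    #include <signal.h>
    #include <iostream>

    int main() {
        sigset_t set;
        sigemptyset(&set);                     // start with an empty set (sigfillset would start full)
        sigaddset(&set, SIGINT);               // add SIGINT (signal 2)
        sigaddset(&set, SIGTERM);              // add SIGTERM
        sigdelset(&set, SIGINT);               // then remove SIGINT again

        std::cout << "SIGTERM in set: " << sigismember(&set, SIGTERM) << std::endl;  // prints 1
        std::cout << "SIGINT  in set: " << sigismember(&set, SIGINT)  << std::endl;  // prints 0
        return 0;
    }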

A Simple Example of Signal Handling

Consider the following code snippet:

    #include <iostream>
    #include <signal.h>
    #include <unistd.h>

    // Custom handler for signal 2 (SIGINT); for demonstration only.
    void signal_handler(int signum) {
        std::cout << "Caught signal " << signum << std::endl;
    }

    int main() {
        signal(2, signal_handler);              // Set up a custom signal handler for signal 2

        sigset_t set;                           // Signal set used for blocking signals
        sigemptyset(&set);
        sigaddset(&set, 2);                     // Put signal 2 (SIGINT) into the set
        sigprocmask(SIG_BLOCK, &set, nullptr);  // Block signal 2

        while (true) {
            std::cout << "Process is running... PID: " << getpid() << std::endl;
            sleep(1);
        }
    }

This code blocks signal 2 (SIGINT), so pressing Ctrl+C no longer interrupts the loop: the signal stays pending instead of being delivered. Careful signal handling is crucial for responsive and stable programs, especially in multi-threaded environments.

Using sigpending

The sigpending function is used to check for pending signals. Here's an example:

    #include <iostream>
    #include <signal.h>
    #include <unistd.h>

    // Custom handler for signal 2 (SIGINT).
    void signal_handler(int signum) {
        std::cout << "Caught signal " << signum << std::endl;
    }

    // Print the standard signals 1..31 as a bitmap: 1 = pending, 0 = not pending.
    void PrintPendingsignals(const sigset_t &pending) {
        for (int i = 1; i <= 31; ++i) {
            if (sigismember(&pending, i)) {
                std::cout << "1";
            } else {
                std::cout << "0";
            }
        }
        std::cout << "\n";
    }

    int main() {
        // Block signal 2 and send it to ourselves so it becomes pending
        signal(2, signal_handler);
        sigset_t set;
        sigemptyset(&set);
        sigaddset(&set, 2);
        sigprocmask(SIG_BLOCK, &set, nullptr);
        kill(getpid(), 2);                      // signal 2 is now pending, not delivered

        int cnt = 0;
        while (true) {
            sigset_t pending;
            sigpending(&pending);               // query the pending-signal set
            PrintPendingsignals(pending);
            if (cnt == 5) {
                sigprocmask(SIG_UNBLOCK, &set, nullptr);  // unblock: the handler runs now
            }
            cnt++;
            sleep(1);
        }
    }

This code blocks signal 2, sends it to itself so that it shows up as pending, prints the pending-signal bitmap once per second, and after five iterations unblocks the signal so the custom handler finally runs. By understanding and effectively handling signals, developers can create more robust and reliable applications.

Reprinted from: http://rswfk.baihongyu.com/
