
Exploring iOS Multithreading: Common GCD Functions



Contents
  • Main Content
    • Singleton
    • Barrier Functions
    • Dispatch Groups (dispatch_group_t)
    • Semaphores (dispatch_semaphore_t)
    • dispatch_source
  • Summary

    Main Content

    In the previous article we looked at how GCD tasks work under the hood; next, let's explore the GCD functions we use most often in day-to-day development.

    Singleton

    Let's start with the source code to see what dispatch_once, which we use to create singletons, actually does, and what guarantees that the block runs only once globally.

    void dispatch_once(dispatch_once_t *val, dispatch_block_t block) {
        dispatch_once_f(val, block, _dispatch_Block_invoke(block));
    }
    void dispatch_once_f(dispatch_once_t *val, void *ctxt, dispatch_function_t func) {
        dispatch_once_gate_t l = (dispatch_once_gate_t)val;
        #if !DISPATCH_ONCE_INLINE_FASTPATH || DISPATCH_ONCE_USE_QUIESCENT_COUNTER
            uintptr_t v = os_atomic_load(&l->dgo_once, acquire);
            if (likely(v == DLOCK_ONCE_DONE)) {
                return;
            }
            #if DISPATCH_ONCE_USE_QUIESCENT_COUNTER
                if (likely(DISPATCH_ONCE_IS_GEN(v))) {
                    return _dispatch_once_mark_done_if_quiesced(l, v);
                }
            #endif
        #endif
        if (_dispatch_once_gate_tryenter(l)) {
            return _dispatch_once_callout(l, ctxt, func);
        }
        return _dispatch_once_wait(l);
    }
    // Atomic compare-and-swap: if &l->dgo_once equals DLOCK_ONCE_UNLOCKED (the block has not run yet),
    // store _dispatch_lock_value_for_self() into &l->dgo_once and return true; otherwise return false
    #define os_atomic_cmpxchg(p, e, v, m) \
        ({ _os_atomic_basetypeof(p) _r = (e); \
        atomic_compare_exchange_strong_explicit(_os_atomic_c11_atomic(p), \
        &_r, v, memory_order_##m, memory_order_relaxed); })
    static inline bool _dispatch_once_gate_tryenter(dispatch_once_gate_t l) {
        return os_atomic_cmpxchg(&l->dgo_once, DLOCK_ONCE_UNLOCKED, (uintptr_t)_dispatch_lock_value_for_self(), relaxed);
    }
    

    As with ordinary tasks, the block is wrapped first. The state held in dispatch_once_t *val is then checked: DLOCK_ONCE_DONE means the block has already run, so the call returns immediately. Otherwise _dispatch_once_gate_tryenter checks whether the block is already being executed: if not, _dispatch_once_callout runs it; if another thread is running it, _dispatch_once_wait blocks until it finishes.

    static void _dispatch_once_callout(dispatch_once_gate_t l, void *ctxt, dispatch_function_t func) {
        _dispatch_client_callout(ctxt, func); // execute the function
        _dispatch_once_gate_broadcast(l); // mark execution as finished
    }
    static inline void _dispatch_once_gate_broadcast(dispatch_once_gate_t l) {
        dispatch_lock value_self = _dispatch_lock_value_for_self();
        uintptr_t v;
        #if DISPATCH_ONCE_USE_QUIESCENT_COUNTER
            v = _dispatch_once_mark_quiescing(l);
        #else
            v = _dispatch_once_mark_done(l); // set &l->dgo_once to DLOCK_ONCE_DONE to mark the function as finished (acts as the broadcast)
        #endif
        if (likely((dispatch_lock)v == value_self)) return;
        _dispatch_gate_broadcast_slow(&l->dgo_gate, (dispatch_lock)v);
    }
    void _dispatch_once_wait(dispatch_once_gate_t dgo) {
        ... (part omitted) ...
        for (;;) {
            // atomically read the current state of &dgo->dgo_once
            os_atomic_rmw_loop(&dgo->dgo_once, old_v, new_v, relaxed, {
                if (likely(old_v == DLOCK_ONCE_DONE)) {
                    os_atomic_rmw_loop_give_up(return); // exit the loop
                }
                #if DISPATCH_ONCE_USE_QUIESCENT_COUNTER
                if (DISPATCH_ONCE_IS_GEN(old_v)) {
                    os_atomic_rmw_loop_give_up({
                        os_atomic_thread_fence(acquire);
                        return _dispatch_once_mark_done_if_quiesced(dgo, old_v);
                    });
                }
                #endif
                new_v = old_v | (uintptr_t)DLOCK_WAITERS_BIT;
                if (new_v == old_v) os_atomic_rmw_loop_give_up(break);
            });
            ... (part omitted) ...
        }
    }
    

    _dispatch_once_wait spins in a loop until the state becomes DLOCK_ONCE_DONE, at which point os_atomic_rmw_loop_give_up(return); exits the loop. This is how dispatch_once guarantees that the block runs, and the instance is created, exactly once.
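
    For reference, here is a minimal sketch of the usual singleton pattern built on dispatch_once; the class name Manager and its sharedInstance method are only illustrative:

    @interface Manager : NSObject
    + (instancetype)sharedInstance;
    @end

    @implementation Manager
    + (instancetype)sharedInstance {
        static Manager *instance = nil;
        static dispatch_once_t onceToken;      // this is the dispatch_once_t *val seen in the source above
        dispatch_once(&onceToken, ^{
            instance = [[Manager alloc] init]; // runs exactly once, even when called from several threads
        });
        return instance;
    }
    @end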

    Barrier Functions

    A barrier function works like this: the barrier task does not start until every task submitted to the queue before it has finished, and tasks submitted after the barrier must wait for the barrier task to finish before they can run.

    • A barrier function can only intercept tasks in the same queue.
    • A barrier function cannot intercept a global queue, because the system also schedules its own work on global queues.
    • On a serial queue a barrier works on the same principle as an ordinary task.

    Barrier functions come in a synchronous and an asynchronous variant. Let's look at the source of the synchronous one, dispatch_barrier_sync, first:
    void dispatch_barrier_sync(dispatch_queue_t dq, dispatch_block_t work) {
        ... (part omitted) ...
        _dispatch_barrier_sync_f(dq, work, _dispatch_Block_invoke(work), dc_flags); // wraps the block, same as before
    }
    // call chain: _dispatch_barrier_sync_f -> _dispatch_barrier_sync_f_inline
    static inline void _dispatch_barrier_sync_f_inline(dispatch_queue_t dq, void *ctxt, dispatch_function_t func, uintptr_t dc_flags) {
        dispatch_tid tid = _dispatch_tid_self();
        dispatch_lane_t dl = upcast(dq)._dl;
        if (unlikely(!_dispatch_queue_try_acquire_barrier_sync(dl, tid))) {
            return _dispatch_sync_f_slow(dl, ctxt, func, DC_FLAG_BARRIER, dl, DC_FLAG_BARRIER | dc_flags);
        }
        if (unlikely(dl->do_targetq->do_targetq)) {
            return _dispatch_sync_recurse(dl, ctxt, func, DC_FLAG_BARRIER | dc_flags);
        }
        _dispatch_introspection_sync_begin(dl);
        _dispatch_lane_barrier_sync_invoke_and_complete(dl, ctxt, func DISPATCH_TRACE_ARG(_dispatch_trace_item_sync_push_pop(dq, ctxt, func, dc_flags | DC_FLAG_BARRIER)));
    }
    

    Inside _dispatch_barrier_sync_f_inline there are three functions that can end up executing func. Putting symbolic breakpoints on all three shows that the one actually called is _dispatch_sync_f_slow.

    static void _dispatch_sync_f_slow(dispatch_queue_class_t top_dqu, void *ctxt, dispatch_function_t func, uintptr_t top_dc_flags, dispatch_queue_class_t dqu, uintptr_t dc_flags) {
        dispatch_queue_t top_dq = top_dqu._dq;
        dispatch_queue_t dq = dqu._dq;
        if (unlikely(!dq->do_targetq)) {
            return _dispatch_sync_function_invoke(dq, ctxt, func);
        }
        ... (part omitted) ...
        // wait for the tasks ahead of us to finish
        _dispatch_trace_item_push(top_dq, &dsc);
        __DISPATCH_WAIT_FOR_QUEUE__(&dsc, dq);
        if (dsc.dsc_func == NULL) {
            // dsc_func being cleared means that the block ran on another thread ie.
            // case (2) as listed in _dispatch_async_and_wait_f_slow.
            dispatch_queue_t stop_dq = dsc.dc_other;
            return _dispatch_sync_complete_recurse(top_dq, stop_dq, top_dc_flags);
        }
        _dispatch_introspection_sync_begin(top_dq);
        _dispatch_trace_item_pop(top_dq, &dsc);
        _dispatch_sync_invoke_and_complete_recurse(top_dq, ctxt, func, top_dc_flags);
    }
    

    Looking further into _dispatch_sync_f_slow, there are two places inside it that can execute func. Adding symbolic breakpoints again shows that the one actually called is _dispatch_sync_invoke_and_complete_recurse, which is the same function an ordinary synchronous task uses to execute its block. Inside it there is a call to _dispatch_sync_complete_recurse:

    static void _dispatch_sync_complete_recurse(dispatch_queue_t dq, dispatch_queue_t stop_dq, uintptr_t dc_flags){
        bool barrier = (dc_flags & DC_FLAG_BARRIER);
        do {
            if (dq == stop_dq) return;
            if (barrier) {
                dx_wakeup(dq, 0, DISPATCH_WAKEUP_BARRIER_COMPLETE);
            } else {
                _dispatch_lane_non_barrier_complete(upcast(dq)._dl, 0);
            }
            dq = dq->do_targetq;
            barrier = (dq->dq_width == 1);
        } while (unlikely(dq->do_targetq));
    }
    

    This function runs a do-while loop and uses barrier to decide whether a barrier is involved: without a barrier it completes as a non-barrier task so the queue can move on to the next one; with a barrier it calls dx_wakeup.

    #define dx_wakeup(x, y, z) dx_vtable(x)->dq_wakeup(x, y, z)
    

    Like the dq_push used by asynchronous tasks earlier, dq_wakeup is bound to a different function for each queue type:

    DISPATCH_VTABLE_SUBCLASS_INSTANCE(queue_serial, lane,
        .do_type        = DISPATCH_QUEUE_SERIAL_TYPE,
        ......
        .dq_wakeup        = _dispatch_lane_wakeup,
    );
    DISPATCH_VTABLE_SUBCLASS_INSTANCE(queue_concurrent, lane,
        .do_type        = DISPATCH_QUEUE_CONCURRENT_TYPE,
        ......
        .dq_wakeup        = _dispatch_lane_wakeup,
    );
    DISPATCH_VTABLE_SUBCLASS_INSTANCE(queue_global, lane,
        .do_type        = DISPATCH_QUEUE_GLOBAL_ROOT_TYPE,
        ......
        .dq_wakeup        = _dispatch_root_queue_wakeup,
    );
    DISPATCH_VTABLE_SUBCLASS_INSTANCE(queue_main, lane,
        .do_type        = DISPATCH_QUEUE_MAIN_TYPE,
        ......
        .dq_wakeup        = _dispatch_main_queue_wakeup,
    );
    
    // serial queues and custom concurrent queues
    void _dispatch_lane_wakeup(dispatch_lane_class_t dqu, dispatch_qos_t qos, dispatch_wakeup_flags_t flags){
        dispatch_queue_wakeup_target_t target = DISPATCH_QUEUE_WAKEUP_NONE;
        if (unlikely(flags & DISPATCH_WAKEUP_BARRIER_COMPLETE)) {
            // called once the barrier task has completed
            return _dispatch_lane_barrier_complete(dqu, qos, flags);
        }
        if (_dispatch_queue_class_probe(dqu)) {
            target = DISPATCH_QUEUE_WAKEUP_TARGET;
        }
        // wake up the tasks queued after the barrier so they can run
        return _dispatch_queue_wakeup(dqu, qos, flags, target);
    }
    // global concurrent queues: no barrier handling at all, so barriers have no effect on a global concurrent queue
    void _dispatch_root_queue_wakeup(dispatch_queue_global_t dq, DISPATCH_UNUSED dispatch_qos_t qos, dispatch_wakeup_flags_t flags){
        if (!(flags & DISPATCH_WAKEUP_BLOCK_WAIT)) {
            DISPATCH_INTERNAL_CRASH(dq->dq_priority, "Don't try to wake up or override a root queue");
        }
        if (flags & DISPATCH_WAKEUP_CONSUME_2) {
            return _dispatch_release_2_tailcall(dq);
        }
    }
    

    As we can see, the function bound to dq_wakeup for the global concurrent queue contains no barrier-related logic at all, which is why barrier functions have no effect on global concurrent queues.

    But how exactly does the barrier hold back the queue?

    Using symbolic breakpoints again, and adding a delay to a task submitted before the barrier, we can see that after _dispatch_sync_f_slow is called, _dispatch_sync_invoke_and_complete_recurse is not called right away; it is only called once the delayed task has finished. In other words, once _dispatch_sync_f_slow has been entered, the barrier waits for all of the earlier tasks to complete before executing its own task and waking up the tasks queued behind it.

    Then why are there both synchronous and asynchronous barrier functions?

    The choice simply comes down to whether the barrier task itself needs a new thread to run on: that determines whether you use the synchronous or the asynchronous variant.
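
    As a quick illustration (the queue label and the task contents are arbitrary), a barrier on a custom concurrent queue orders the work like this:

    dispatch_queue_t queue = dispatch_queue_create("com.example.concurrent", DISPATCH_QUEUE_CONCURRENT);

    dispatch_async(queue, ^{ NSLog(@"task 1"); });      // may run concurrently with task 2
    dispatch_async(queue, ^{ NSLog(@"task 2"); });

    // The barrier waits for task 1 and task 2 to finish, and holds back task 3 until it has finished itself.
    dispatch_barrier_async(queue, ^{ NSLog(@"barrier"); });

    dispatch_async(queue, ^{ NSLog(@"task 3"); });      // starts only after the barrier block returns

    // Note: using dispatch_get_global_queue(0, 0) here would make the barrier behave
    // like a plain dispatch_async, as explained above.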

    Dispatch Groups (dispatch_group_t)

    Barrier functions have a limitation: they can only intercept tasks in the same queue, so they stop working when the tasks are spread across several queues. In that case we should use a dispatch group, dispatch_group_t, to gate the tasks and guarantee the execution order. dispatch_group_t can be written in two ways:

    dispatch_group_async(g, dispatch_get_global_queue(0, 0), ^{});

    dispatch_group_enter(g); + dispatch_group_leave(g);

    These two forms behave exactly the same: dispatch_group_async is simply dispatch_group_enter and dispatch_group_leave packaged together, as sketched below.
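
    A minimal sketch of both forms (the queue and the work blocks are arbitrary):

    dispatch_group_t group = dispatch_group_create();
    dispatch_queue_t queue = dispatch_get_global_queue(0, 0);

    // Form 1: dispatch_group_async
    dispatch_group_async(group, queue, ^{ NSLog(@"task A"); });

    // Form 2: dispatch_group_enter + dispatch_group_leave (useful around asynchronous callbacks)
    dispatch_group_enter(group);
    dispatch_async(queue, ^{
        NSLog(@"task B");
        dispatch_group_leave(group);   // must balance the enter exactly once
    });

    // Runs on the main queue once both tasks have left the group.
    dispatch_group_notify(group, dispatch_get_main_queue(), ^{
        NSLog(@"all tasks finished");
    });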

    void dispatch_group_async(dispatch_group_t dg, dispatch_queue_t dq, dispatch_block_t db){
        dispatch_continuation_t dc = _dispatch_continuation_alloc();
        uintptr_t dc_flags = DC_FLAG_CONSUME | DC_FLAG_GROUP_ASYNC;
        dispatch_qos_t qos;
        qos = _dispatch_continuation_init(dc, dq, db, 0, dc_flags);
        _dispatch_continuation_group_async(dg, dq, dc, qos);
    }
    static inline void _dispatch_continuation_group_async(dispatch_group_t dg, dispatch_queue_t dq, dispatch_continuation_t dc, dispatch_qos_t qos) {
        dispatch_group_enter(dg);
        dc->dc_data = dg;
        _dispatch_continuation_async(dq, dc, qos, dc->dc_flags);
    }
    static inline void _dispatch_continuation_async(dispatch_queue_class_t dqu, dispatch_continuation_t dc, dispatch_qos_t qos, uintptr_t dc_flags) {
        #if DISPATCH_INTROSPECTION
            if (!(dc_flags & DC_FLAG_NO_INTROSPECTION)) {
                _dispatch_trace_item_push(dqu, dc);
            }
        #else
            (void)dc_flags;
        #endif
        return dx_push(dqu._dq, dc, qos);
    }
    

    Inside dispatch_group_async, dispatch_group_enter is called first, and then dx_push is invoked to execute the task, exactly like an ordinary asynchronous function following the flow we saw before. Along the path _dispatch_root_queues_init_once -> ... -> _dispatch_continuation_invoke_inline we can find the call to dispatch_group_leave:

    static inline void _dispatch_continuation_invoke_inline(dispatch_object_t dou, dispatch_invoke_flags_t flags, dispatch_queue_class_t dqu) {
        ... (part omitted) ...
        if (unlikely(dc_flags & DC_FLAG_GROUP_ASYNC)) {
            _dispatch_continuation_with_group_invoke(dc);
        }
        ... (part omitted) ...
    }
    static inline void _dispatch_continuation_with_group_invoke(dispatch_continuation_t dc) {
        struct dispatch_object_s *dou = dc->dc_data;
        unsigned long type = dx_type(dou);
        if (type == DISPATCH_GROUP_TYPE) {
            _dispatch_client_callout(dc->dc_ctxt, dc->dc_func);
            _dispatch_trace_item_complete(dc);
            dispatch_group_leave((dispatch_group_t)dou);
        } else {
            DISPATCH_INTERNAL_CRASH(dx_type(dou), "Unexpected object type");
        }
    }
    

    Now let's look at the source of dispatch_group_enter and dispatch_group_leave:

    void dispatch_group_enter(dispatch_group_t dg) {
        // The value is decremented on a 32bits wide atomic so that the carry
        // for the 0 -> -1 transition is not propagated to the upper 32bits.
        uint32_t old_bits = os_atomic_sub_orig2o(dg, dg_bits, DISPATCH_GROUP_VALUE_INTERVAL, acquire);
        uint32_t old_value = old_bits & DISPATCH_GROUP_VALUE_MASK;
        if (unlikely(old_value == 0)) {
            _dispatch_retain(dg); // <rdar://problem/22318411>
        }
        if (unlikely(old_value == DISPATCH_GROUP_VALUE_MAX)) {
            DISPATCH_CLIENT_CRASH(old_bits,
            "Too many nested calls to dispatch_group_enter()");
        }
    }
    void dispatch_group_leave(dispatch_group_t dg) {
        // The value is incremented on a 64bits wide atomic so that the carry for
        // the -1 -> 0 transition increments the generation atomically.
        // update dg_state
        uint64_t new_state, old_state = os_atomic_add_orig2o(dg, dg_state, DISPATCH_GROUP_VALUE_INTERVAL, release);
        uint32_t old_value = (uint32_t)(old_state & DISPATCH_GROUP_VALUE_MASK);
        if (unlikely(old_value == DISPATCH_GROUP_VALUE_1)) {
            old_state += DISPATCH_GROUP_VALUE_INTERVAL;
            do {
                new_state = old_state;
                if ((old_state & DISPATCH_GROUP_VALUE_MASK) == 0) {
                    new_state &= ~DISPATCH_GROUP_HAS_WAITERS;
                    new_state &= ~DISPATCH_GROUP_HAS_NOTIFS;
                } else {
                    // If the group was entered again since the atomic_add above,
                    // we can't clear the waiters bit anymore as we don't know for
                    // which generation the waiters are for
                    new_state &= ~DISPATCH_GROUP_HAS_NOTIFS;
                }
                if (old_state == new_state) break;
            } while (unlikely(!os_atomic_cmpxchgv2o(dg, dg_state,  old_state, new_state, &old_state, relaxed)));
            return _dispatch_group_wake(dg, old_state, true);
        }
        if (unlikely(old_value == 0)) {
            DISPATCH_CLIENT_CRASH((uintptr_t)old_value,
            "Unbalanced call to dispatch_group_leave()");
        }
    }
    

    From the source we can see that dispatch_group_enter subtracts 1, increasing the count of currently unfinished tasks in the group, while dispatch_group_leave adds 1, decreasing the count of unfinished tasks.

    When that count drops back to zero, the block registered with dispatch_group_notify is executed to run the follow-up code.

    static inline void _dispatch_group_notify(dispatch_group_t dg, dispatch_queue_t dq, dispatch_continuation_t dsn) {
        uint64_t old_state, new_state;
        dispatch_continuation_t prev;
        dsn->dc_data = dq;
        _dispatch_retain(dq);
        prev = os_mpsc_push_update_tail(os_mpsc(dg, dg_notify), dsn, do_next);
        if (os_mpsc_push_was_empty(prev)) _dispatch_retain(dg);
        os_mpsc_push_update_prev(os_mpsc(dg, dg_notify), prev, dsn, do_next);
        if (os_mpsc_push_was_empty(prev)) {
            // observe dg_state
            os_atomic_rmw_loop2o(dg, dg_state, old_state, new_state, release, {
                new_state = old_state | DISPATCH_GROUP_HAS_NOTIFS;
                if ((uint32_t)old_state == 0) {
                    os_atomic_rmw_loop_give_up({
                        return _dispatch_group_wake(dg, new_state, false);
                    });
                }
            });
        }
    }
    

    We can see that _dispatch_group_notify only runs the follow-up block when old_state == 0. If dispatch_group_enter and dispatch_group_leave are not paired one-to-one, the notify block will never execute. dispatch_group_leave also checks for old_value == 0 and crashes, so calling dispatch_group_leave more often than dispatch_group_enter crashes immediately.

    Semaphores (dispatch_semaphore_t)

    The other GCD primitive commonly used to control execution order is the semaphore, whose main job is to limit the number of concurrent tasks:

    • dispatch_semaphore_create(0) creates a semaphore and specifies its initial value.
    • dispatch_semaphore_signal(sem) signals the semaphore, incrementing its value by 1.
    • dispatch_semaphore_wait(sem, DISPATCH_TIME_FOREVER) waits on the semaphore, decrementing its value by 1; if the value drops below 0 the thread blocks and keeps waiting, and if the value is greater than or equal to 0 the code after the wait runs (see the sketch after this list).
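
    For example, a semaphore created with a value of 2 limits a global queue to two of these tasks in flight at a time; this is only a sketch, with arbitrary task bodies:

    dispatch_semaphore_t sem = dispatch_semaphore_create(2);   // allow at most 2 concurrent tasks
    dispatch_queue_t queue = dispatch_get_global_queue(0, 0);

    for (int i = 0; i < 10; i++) {
        dispatch_semaphore_wait(sem, DISPATCH_TIME_FOREVER);   // -1; blocks once the value would drop below 0
        dispatch_async(queue, ^{
            NSLog(@"task %d", i);
            dispatch_semaphore_signal(sem);                    // +1; lets the next waiting iteration proceed
        });
    }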

    Next, let's look at the source again to see how the semaphore is implemented:

    intptr_t dispatch_semaphore_signal(dispatch_semaphore_t dsema) {
        // increment the semaphore value by 1
        long value = os_atomic_inc2o(dsema, dsema_value, release);
        if (likely(value > 0)) {
            return 0;
        }
        if (unlikely(value == LONG_MIN)) {
            DISPATCH_CLIENT_CRASH(value, "Unbalanced call to dispatch_semaphore_signal()");
        }
        return _dispatch_semaphore_signal_slow(dsema);
    }
    intptr_t _dispatch_semaphore_signal_slow(dispatch_semaphore_t dsema) {
        _dispatch_sema4_create(&dsema->dsema_sema, _DSEMA4_POLICY_FIFO);
        _dispatch_sema4_signal(&dsema->dsema_sema, 1);
        return 1;
    }
    

    As we can see, signal is essentially just a +1 operation; next, let's focus on how wait does the waiting:

    intptr_t dispatch_semaphore_wait(dispatch_semaphore_t dsema, dispatch_time_t timeout) {
        // decrement the semaphore value by 1
        long value = os_atomic_dec2o(dsema, dsema_value, acquire);
        // if the value is still >= 0, return immediately and run the code that follows
        if (likely(value >= 0)) {
            return 0;
        }
        return _dispatch_semaphore_wait_slow(dsema, timeout);
    }
    static intptr_t _dispatch_semaphore_wait_slow(dispatch_semaphore_t dsema, dispatch_time_t timeout){
        long orig;
        _dispatch_sema4_create(&dsema->dsema_sema, _DSEMA4_POLICY_FIFO);
        switch (timeout) {
            default:
                if (!_dispatch_sema4_timedwait(&dsema->dsema_sema, timeout)) {
                    break;
                }
            // Fall through and try to undo what the fast path did to
            // dsema->dsema_value
            case DISPATCH_TIME_NOW:
                orig = dsema->dsema_value;
                while (orig < 0) {
                    if (os_atomic_cmpxchgv2o(dsema, dsema_value, orig, orig + 1, &orig, relaxed)) {
                        return _DSEMA4_TIMEOUT();
                    }
                }
            // Another thread called semaphore_signal().
            // Fall through and drain the wakeup.
            case DISPATCH_TIME_FOREVER:
                _dispatch_sema4_wait(&dsema->dsema_sema);
                break;
        }
        return 0;
    }
    void _dispatch_sema4_wait(_dispatch_sema4_t *sema) {
        int ret = 0;
        do {
            ret = sem_wait(sema);
        } while (ret == -1 && errno == EINTR);
        DISPATCH_SEMAPHORE_VERIFY_RET(ret);
    }
    

    We can see that the waiting function branches on the timeout we passed in, and then blocks the current thread in a do-while loop on the underlying semaphore; once the semaphore becomes available again (its value is back to >= 0) the loop exits and the code after the wait runs.
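
    With a finite timeout, dispatch_semaphore_wait returns a non-zero value instead of blocking forever, which lets the caller detect the timeout; a small sketch:

    dispatch_semaphore_t sem = dispatch_semaphore_create(0);

    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        // ... some work ...
        dispatch_semaphore_signal(sem);
    });

    // Wait at most 2 seconds; the return value tells us whether the signal arrived in time.
    dispatch_time_t timeout = dispatch_time(DISPATCH_TIME_NOW, (int64_t)(2 * NSEC_PER_SEC));
    if (dispatch_semaphore_wait(sem, timeout) != 0) {
        NSLog(@"timed out waiting for the signal");
    }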

    • Note that dispatch_semaphore_signal must not be called fewer times than dispatch_semaphore_wait; otherwise the semaphore cannot be released and the program crashes. _dispatch_semaphore_dispose checks whether the current value is lower than the original value and triggers the crash:
    void _dispatch_semaphore_dispose(dispatch_object_t dou, DISPATCH_UNUSED bool *allow_free) {
        dispatch_semaphore_t dsema = dou._dsema;
        if (dsema->dsema_value < dsema->dsema_orig) {
            DISPATCH_CLIENT_CRASH(dsema->dsema_orig - dsema->dsema_value, "Semaphore object deallocated while in use");
        }
        _dispatch_sema4_dispose(&dsema->dsema_sema, _DSEMA4_POLICY_FIFO);
    }
    

    dispatch_source

    A dispatch_source plays the same role as a runloop source: it is used to monitor events such as timers, system memory pressure, pending mach port messages, and so on.

    dispatch_source_create(DISPATCH_SOURCE_TYPE_TIMER, 0, 0, dispatch_get_global_queue(0, 0)) creates the event source.

    dispatch_source_set_event_handler(source, ^{}) sets the event handler for the source.

    dispatch_source_merge_data(source, 1) merges data into the event source.

    dispatch_source_get_data(source) reads the source's pending event data.

    dispatch_resume(source); starts or resumes the source.

    dispatch_suspend(source); suspends (pauses) the source.

    dispatch_source_cancel(source); cancels the event source asynchronously.

    dispatch_cancel(source); cancels the event source.
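
    Putting these calls together, here is a minimal GCD timer sketch; the interval and leeway are arbitrary, and in real code the source must be kept alive (for example in a property) or it will be deallocated:

    dispatch_queue_t queue = dispatch_get_global_queue(0, 0);
    dispatch_source_t timer = dispatch_source_create(DISPATCH_SOURCE_TYPE_TIMER, 0, 0, queue);

    // Fire every second, starting now, with 0.1 s of leeway.
    dispatch_source_set_timer(timer,
                              dispatch_time(DISPATCH_TIME_NOW, 0),
                              1 * NSEC_PER_SEC,
                              (uint64_t)(0.1 * NSEC_PER_SEC));
    dispatch_source_set_event_handler(timer, ^{
        NSLog(@"timer fired");
    });
    dispatch_resume(timer);   // sources are created suspended; resume to start receiving events

    // Later, when the timer is no longer needed:
    // dispatch_source_cancel(timer);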

    Summary

    A barrier function intercepts tasks in the same queue; it cannot intercept a global queue, because the system also schedules its own work there, and on a serial queue it works on the same principle as an ordinary task.

    A dispatch group controls the execution order of the tasks in the group through a count of unfinished tasks.

    A semaphore is mainly used to limit the number of concurrent tasks.

    That concludes this look at the GCD functions commonly used in iOS multithreading development.
